00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2387
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3648
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.062 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.063 The recommended git tool is: git
00:00:00.063 using credential 00000000-0000-0000-0000-000000000002
00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.081 Fetching changes from the remote Git repository
00:00:00.083 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.102 Using shallow fetch with depth 1
00:00:00.102 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.102 > git --version # timeout=10
00:00:00.129 > git --version # 'git version 2.39.2'
00:00:00.129 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.158 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.158 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.330 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.340 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.351 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.351 > git config core.sparsecheckout # timeout=10
00:00:03.362 > git read-tree -mu HEAD # timeout=10
00:00:03.379 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.396 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.396 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.470 [Pipeline] Start of Pipeline
00:00:03.483 [Pipeline] library
00:00:03.484 Loading library shm_lib@master
00:00:03.484 Library shm_lib@master is cached. Copying from home.
00:00:03.498 [Pipeline] node
00:00:03.505 Running on WFP3 in /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:00:03.506 [Pipeline] {
00:00:03.512 [Pipeline] catchError
00:00:03.513 [Pipeline] {
00:00:03.522 [Pipeline] wrap
00:00:03.527 [Pipeline] {
00:00:03.533 [Pipeline] stage
00:00:03.534 [Pipeline] { (Prologue)
00:00:03.720 [Pipeline] sh
00:00:04.019 + logger -p user.info -t JENKINS-CI
00:00:04.040 [Pipeline] echo
00:00:04.042 Node: WFP3
00:00:04.051 [Pipeline] sh
00:00:04.391 [Pipeline] setCustomBuildProperty
00:00:04.404 [Pipeline] echo
00:00:04.406 Cleanup processes
00:00:04.412 [Pipeline] sh
00:00:04.706 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:04.706 8218 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:04.719 [Pipeline] sh
00:00:05.011 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:05.011 ++ grep -v 'sudo pgrep'
00:00:05.011 ++ awk '{print $1}'
00:00:05.011 + sudo kill -9
00:00:05.011 + true
00:00:05.030 [Pipeline] cleanWs
00:00:05.039 [WS-CLEANUP] Deleting project workspace...
00:00:05.039 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.045 [WS-CLEANUP] done
00:00:05.048 [Pipeline] setCustomBuildProperty
00:00:05.058 [Pipeline] sh
00:00:05.342 + sudo git config --global --replace-all safe.directory '*'
00:00:05.424 [Pipeline] httpRequest
00:00:07.598 [Pipeline] echo
00:00:07.599 Sorcerer 10.211.164.20 is alive
00:00:07.607 [Pipeline] retry
00:00:07.608 [Pipeline] {
00:00:07.621 [Pipeline] httpRequest
00:00:07.626 HttpMethod: GET
00:00:07.626 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.627 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.643 Response Code: HTTP/1.1 200 OK
00:00:07.644 Success: Status code 200 is in the accepted range: 200,404
00:00:07.644 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.167 [Pipeline] }
00:00:12.185 [Pipeline] // retry
00:00:12.193 [Pipeline] sh
00:00:12.485 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.503 [Pipeline] httpRequest
00:00:13.056 [Pipeline] echo
00:00:13.059 Sorcerer 10.211.164.20 is alive
00:00:13.069 [Pipeline] retry
00:00:13.071 [Pipeline] {
00:00:13.085 [Pipeline] httpRequest
00:00:13.090 HttpMethod: GET
00:00:13.091 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:13.092 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:13.107 Response Code: HTTP/1.1 200 OK
00:00:13.107 Success: Status code 200 is in the accepted range: 200,404
00:00:13.108 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:13.292 [Pipeline] }
00:01:13.309 [Pipeline] // retry
00:01:13.317 [Pipeline] sh
00:01:13.611 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:16.169 [Pipeline] sh
00:01:16.458 + git -C spdk log --oneline -n5
00:01:16.458 c13c99a5e test: Various fixes for Fedora40
00:01:16.458 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:16.458 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:16.458 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:16.458 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:16.470 [Pipeline] }
00:01:16.483 [Pipeline] // stage
00:01:16.491 [Pipeline] stage
00:01:16.494 [Pipeline] { (Prepare)
00:01:16.513 [Pipeline] writeFile
00:01:16.527 [Pipeline] sh
00:01:16.813 + logger -p user.info -t JENKINS-CI
00:01:16.826 [Pipeline] sh
00:01:17.112 + logger -p user.info -t JENKINS-CI
00:01:17.125 [Pipeline] sh
00:01:17.415 + cat autorun-spdk.conf
00:01:17.415 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.415 SPDK_TEST_NVMF=1
00:01:17.415 SPDK_TEST_NVME_CLI=1
00:01:17.415 SPDK_TEST_NVMF_TRANSPORT=rdma
00:01:17.415 SPDK_TEST_NVMF_NICS=e810
00:01:17.415 SPDK_RUN_UBSAN=1
00:01:17.424 RUN_NIGHTLY=1
00:01:17.428 [Pipeline] readFile
00:01:17.453 [Pipeline] withEnv
00:01:17.455 [Pipeline] {
00:01:17.468 [Pipeline] sh
00:01:17.803 + set -ex
00:01:17.803 + [[ -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf ]]
00:01:17.803 + source /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf
00:01:17.803 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.803 ++ SPDK_TEST_NVMF=1
00:01:17.803 ++ SPDK_TEST_NVME_CLI=1
00:01:17.803 ++ SPDK_TEST_NVMF_TRANSPORT=rdma
00:01:17.803 ++ SPDK_TEST_NVMF_NICS=e810
00:01:17.803 ++ SPDK_RUN_UBSAN=1
00:01:17.803 ++ RUN_NIGHTLY=1
00:01:17.803 + case $SPDK_TEST_NVMF_NICS in
00:01:17.803 + DRIVERS=ice
00:01:17.803 + [[ rdma == \r\d\m\a ]]
00:01:17.803 + DRIVERS+=' irdma'
00:01:17.803 + [[ -n ice irdma ]]
00:01:17.803 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:17.803 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:17.803 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:17.803 rmmod: ERROR: Module i40iw is not currently loaded
00:01:17.803 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:17.803 + true
00:01:17.803 + for D in $DRIVERS
00:01:17.803 + sudo modprobe ice
00:01:17.803 + for D in $DRIVERS
00:01:17.803 + sudo modprobe irdma
00:01:18.064 + exit 0
00:01:18.073 [Pipeline] }
00:01:18.089 [Pipeline] // withEnv
00:01:18.094 [Pipeline] }
00:01:18.108 [Pipeline] // stage
00:01:18.117 [Pipeline] catchError
00:01:18.119 [Pipeline] {
00:01:18.131 [Pipeline] timeout
00:01:18.131 Timeout set to expire in 1 hr 0 min
00:01:18.133 [Pipeline] {
00:01:18.146 [Pipeline] stage
00:01:18.147 [Pipeline] { (Tests)
00:01:18.160 [Pipeline] sh
00:01:18.450 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:01:18.450 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:01:18.450 + DIR_ROOT=/var/jenkins/workspace/nvmf-cvl-phy-autotest
00:01:18.450 + [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest ]]
00:01:18.450 + DIR_SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:01:18.450 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/output
00:01:18.450 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk ]]
00:01:18.450 + [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]]
00:01:18.450 + mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/output
00:01:18.450 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]]
00:01:18.450 + [[ nvmf-cvl-phy-autotest == pkgdep-* ]]
00:01:18.450 + cd /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:01:18.450 + source /etc/os-release
00:01:18.450 ++ NAME='Fedora Linux'
00:01:18.450 ++ VERSION='39 (Cloud Edition)'
00:01:18.450 ++ ID=fedora
00:01:18.450 ++ VERSION_ID=39
00:01:18.450 ++ VERSION_CODENAME=
00:01:18.450 ++ PLATFORM_ID=platform:f39
00:01:18.450 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:18.450 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:18.450 ++ LOGO=fedora-logo-icon
00:01:18.450 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:18.450 ++ HOME_URL=https://fedoraproject.org/
00:01:18.450 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:18.450 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:18.450 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:18.450 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:18.450 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:18.450 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:18.450 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:18.450 ++ SUPPORT_END=2024-11-12
00:01:18.450 ++ VARIANT='Cloud Edition'
00:01:18.450 ++ VARIANT_ID=cloud
00:01:18.450 + uname -a
00:01:18.450 Linux spdk-wfp-03 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:18.450 + sudo /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status
00:01:20.993 Hugepages
00:01:20.993 node hugesize free / total
00:01:20.993 node0 1048576kB 0 / 0
00:01:20.993 node0 2048kB 0 / 0
00:01:20.993 node1 1048576kB 0 / 0
00:01:20.993 node1 2048kB 0 / 0
00:01:20.993
00:01:20.993 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:20.993 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:20.993 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:20.993 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:20.993 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:20.993 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:20.993 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:20.994 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:20.994 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:20.994 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:20.994 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:01:20.994 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:20.994 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:20.994 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:20.994 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:20.994 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:20.994 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:20.994 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:20.994 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:20.994 + rm -f /tmp/spdk-ld-path
00:01:20.994 + source autorun-spdk.conf
00:01:20.994 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.994 ++ SPDK_TEST_NVMF=1
00:01:20.994 ++ SPDK_TEST_NVME_CLI=1
00:01:20.994 ++ SPDK_TEST_NVMF_TRANSPORT=rdma
00:01:20.994 ++ SPDK_TEST_NVMF_NICS=e810
00:01:20.994 ++ SPDK_RUN_UBSAN=1
00:01:20.994 ++ RUN_NIGHTLY=1
00:01:20.994 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:20.994 + [[ -n '' ]]
00:01:20.994 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:01:20.994 + for M in /var/spdk/build-*-manifest.txt
00:01:20.994 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:20.994 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/
00:01:20.994 + for M in /var/spdk/build-*-manifest.txt
00:01:20.994 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:20.994 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/
00:01:20.994 + for M in /var/spdk/build-*-manifest.txt
00:01:20.994 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:20.994 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/
00:01:20.994 ++ uname
00:01:20.994 + [[ Linux == \L\i\n\u\x ]]
00:01:20.994 + sudo dmesg -T
00:01:20.994 + sudo dmesg --clear
00:01:20.994 + dmesg_pid=9247
00:01:20.994 + [[ Fedora Linux == FreeBSD ]]
00:01:20.994 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:20.994 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:20.994 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:20.994 + sudo dmesg -Tw
00:01:20.994 + [[ -x /usr/src/fio-static/fio ]]
00:01:20.994 + export FIO_BIN=/usr/src/fio-static/fio
00:01:20.994 + FIO_BIN=/usr/src/fio-static/fio
00:01:20.994 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\c\v\l\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:20.994 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:20.994 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:20.994 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:20.994 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:20.994 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:20.994 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:20.994 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:20.994 + spdk/autorun.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf
00:01:20.994 Test configuration:
00:01:20.994 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.994 SPDK_TEST_NVMF=1
00:01:20.994 SPDK_TEST_NVME_CLI=1
00:01:20.994 SPDK_TEST_NVMF_TRANSPORT=rdma
00:01:20.994 SPDK_TEST_NVMF_NICS=e810
00:01:20.994 SPDK_RUN_UBSAN=1
00:01:20.994 RUN_NIGHTLY=1 04:59:17 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:01:20.994 04:59:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:01:20.994 04:59:17 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:20.994 04:59:17 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:20.994 04:59:17 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:20.994 04:59:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:20.994 04:59:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:20.994 04:59:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:20.994 04:59:17 -- paths/export.sh@5 -- $ export PATH
00:01:20.994 04:59:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:20.994 04:59:17 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output
00:01:20.994 04:59:17 -- common/autobuild_common.sh@440 -- $ date +%s
00:01:20.994 04:59:17 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732075157.XXXXXX
00:01:20.994 04:59:17 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732075157.PYF464
00:01:20.994 04:59:17 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:01:20.994 04:59:17 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:01:20.994 04:59:17 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/'
00:01:20.994 04:59:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:20.994 04:59:17 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:21.254 04:59:17 -- common/autobuild_common.sh@456 -- $ get_config_params
00:01:21.254 04:59:17 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:01:21.254 04:59:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.254 04:59:17 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:01:21.254 04:59:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:21.254 04:59:17 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:21.254 04:59:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:01:21.254 04:59:17 -- spdk/autobuild.sh@16 -- $ date -u
00:01:21.254 Wed Nov 20 03:59:17 AM UTC 2024
00:01:21.254 04:59:17 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:21.254 LTS-67-gc13c99a5e
00:01:21.254 04:59:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:21.254 04:59:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:21.254 04:59:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:21.254 04:59:17 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:21.254 04:59:17 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:21.254 04:59:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.254 ************************************
00:01:21.254 START TEST ubsan
00:01:21.254 ************************************
00:01:21.255 04:59:17 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:21.255 using ubsan
00:01:21.255
00:01:21.255 real 0m0.000s
00:01:21.255 user 0m0.000s
00:01:21.255 sys 0m0.000s
00:01:21.255 04:59:17 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:21.255 04:59:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.255 ************************************
00:01:21.255 END TEST ubsan
00:01:21.255 ************************************
00:01:21.255 04:59:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:21.255 04:59:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:21.255 04:59:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:21.255 04:59:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:21.255 04:59:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:21.255 04:59:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:21.255 04:59:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:21.255 04:59:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:21.255 04:59:17 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:21.515 Using default SPDK env in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk
00:01:21.515 Using default DPDK in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build
00:01:22.456 Using 'verbs' RDMA provider
00:01:38.302 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:48.299 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:48.559 Creating mk/config.mk...done.
00:01:48.559 Creating mk/cc.flags.mk...done.
00:01:48.559 Type 'make' to build.
00:01:48.559 04:59:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:48.559 04:59:45 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:48.559 04:59:45 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:48.559 04:59:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.559 ************************************
00:01:48.559 START TEST make
00:01:48.559 ************************************
00:01:48.559 04:59:45 -- common/autotest_common.sh@1114 -- $ make -j96
00:01:48.820 make[1]: Nothing to be done for 'all'.
00:01:56.957 The Meson build system
00:01:56.957 Version: 1.5.0
00:01:56.957 Source dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk
00:01:56.957 Build dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp
00:01:56.957 Build type: native build
00:01:56.957 Program cat found: YES (/usr/bin/cat)
00:01:56.957 Project name: DPDK
00:01:56.957 Project version: 23.11.0
00:01:56.957 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:56.957 C linker for the host machine: cc ld.bfd 2.40-14
00:01:56.957 Host machine cpu family: x86_64
00:01:56.957 Host machine cpu: x86_64
00:01:56.957 Message: ## Building in Developer Mode ##
00:01:56.957 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:56.957 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:56.957 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:56.957 Program python3 found: YES (/usr/bin/python3)
00:01:56.957 Program cat found: YES (/usr/bin/cat)
00:01:56.957 Compiler for C supports arguments -march=native: YES
00:01:56.957 Checking for size of "void *" : 8
00:01:56.957 Checking for size of "void *" : 8 (cached)
00:01:56.957 Library m found: YES
00:01:56.957 Library numa found: YES
00:01:56.957 Has header "numaif.h" : YES
00:01:56.957 Library fdt found: NO
00:01:56.957 Library execinfo found: NO
00:01:56.957 Has header "execinfo.h" : YES
00:01:56.957 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:56.957 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:56.957 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:56.957 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:56.957 Run-time dependency openssl found: YES 3.1.1
00:01:56.957 Run-time dependency libpcap found: YES 1.10.4
00:01:56.957 Has header "pcap.h" with dependency libpcap: YES
00:01:56.957 Compiler for C supports arguments -Wcast-qual: YES
00:01:56.957 Compiler for C supports arguments -Wdeprecated: YES
00:01:56.957 Compiler for C supports arguments -Wformat: YES
00:01:56.957 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:56.957 Compiler for C supports arguments -Wformat-security: NO
00:01:56.957 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:56.957 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:56.957 Compiler for C supports arguments -Wnested-externs: YES
00:01:56.957 Compiler for C supports arguments -Wold-style-definition: YES
00:01:56.957 Compiler for C supports arguments -Wpointer-arith: YES
00:01:56.957 Compiler for C supports arguments -Wsign-compare: YES
00:01:56.957 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:56.957 Compiler for C supports arguments -Wundef: YES
00:01:56.957 Compiler for C supports arguments -Wwrite-strings: YES
00:01:56.957 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:56.957 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:56.957 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:56.957 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:56.957 Program objdump found: YES (/usr/bin/objdump)
00:01:56.957 Compiler for C supports arguments -mavx512f: YES
00:01:56.957 Checking if "AVX512 checking" compiles: YES
00:01:56.957 Fetching value of define "__SSE4_2__" : 1
00:01:56.957 Fetching value of define "__AES__" : 1
00:01:56.957 Fetching value of define "__AVX__" : 1
00:01:56.957 Fetching value of define "__AVX2__" : 1
00:01:56.957 Fetching value of define "__AVX512BW__" : 1
00:01:56.957 Fetching value of define "__AVX512CD__" : 1
00:01:56.957 Fetching value of define "__AVX512DQ__" : 1
00:01:56.957 Fetching value of define "__AVX512F__" : 1
00:01:56.957 Fetching value of define "__AVX512VL__" : 1
00:01:56.957 Fetching value of define "__PCLMUL__" : 1
00:01:56.957 Fetching value of define "__RDRND__" : 1
00:01:56.957 Fetching value of define "__RDSEED__" : 1
00:01:56.957 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:56.957 Fetching value of define "__znver1__" : (undefined)
00:01:56.957 Fetching value of define "__znver2__" : (undefined)
00:01:56.957 Fetching value of define "__znver3__" : (undefined)
00:01:56.957 Fetching value of define "__znver4__" : (undefined)
00:01:56.957 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:56.957 Message: lib/log: Defining dependency "log"
00:01:56.957 Message: lib/kvargs: Defining dependency "kvargs"
00:01:56.957 Message: lib/telemetry: Defining dependency "telemetry"
00:01:56.957 Checking for function "getentropy" : NO
00:01:56.957 Message: lib/eal: Defining dependency "eal"
00:01:56.958 Message: lib/ring: Defining dependency "ring"
00:01:56.958 Message: lib/rcu: Defining dependency "rcu"
00:01:56.958 Message: lib/mempool: Defining dependency "mempool"
00:01:56.958 Message: lib/mbuf: Defining dependency "mbuf"
00:01:56.958 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:56.958 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:56.958 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:56.958 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:56.958 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:56.958 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:56.958 Compiler for C supports arguments -mpclmul: YES
00:01:56.958 Compiler for C supports arguments -maes: YES
00:01:56.958 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:56.958 Compiler for C supports arguments -mavx512bw: YES
00:01:56.958 Compiler for C supports arguments -mavx512dq: YES
00:01:56.958 Compiler for C supports arguments -mavx512vl: YES
00:01:56.958 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:56.958 Compiler for C supports arguments -mavx2: YES
00:01:56.958 Compiler for C supports arguments -mavx: YES
00:01:56.958 Message: lib/net: Defining dependency "net"
00:01:56.958 Message: lib/meter: Defining dependency "meter"
00:01:56.958 Message: lib/ethdev: Defining dependency "ethdev"
00:01:56.958 Message: lib/pci: Defining dependency "pci"
00:01:56.958 Message: lib/cmdline: Defining dependency "cmdline"
00:01:56.958 Message: lib/hash: Defining dependency "hash"
00:01:56.958 Message: lib/timer: Defining dependency "timer"
00:01:56.958 Message: lib/compressdev: Defining dependency "compressdev"
00:01:56.958 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:56.958 Message: lib/dmadev: Defining dependency "dmadev"
00:01:56.958 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:56.958 Message: lib/power: Defining dependency "power"
00:01:56.958 Message: lib/reorder: Defining dependency "reorder"
00:01:56.958 Message: lib/security: Defining dependency "security"
00:01:56.958 Has header "linux/userfaultfd.h" : YES
00:01:56.958 Has header "linux/vduse.h" : YES
00:01:56.958 Message: lib/vhost: Defining dependency "vhost"
00:01:56.958 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:56.958 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:56.958 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:56.958 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:56.958 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:56.958 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:56.958 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:56.958 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:56.958 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:56.958 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:56.958 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:56.958 Configuring doxy-api-html.conf using configuration
00:01:56.958 Configuring doxy-api-man.conf using configuration
00:01:56.958 Program mandb found: YES (/usr/bin/mandb)
00:01:56.958 Program sphinx-build found: NO
00:01:56.958 Configuring rte_build_config.h using configuration
00:01:56.958 Message:
00:01:56.958 =================
00:01:56.958 Applications Enabled
00:01:56.958 =================
00:01:56.958
00:01:56.958 apps:
00:01:56.958
00:01:56.958
00:01:56.958 Message:
00:01:56.958 =================
00:01:56.958 Libraries Enabled
00:01:56.958 =================
00:01:56.958
00:01:56.958 libs:
00:01:56.958 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:56.958 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:56.958 cryptodev, dmadev, power, reorder, security, vhost,
00:01:56.958
00:01:56.958 Message:
00:01:56.958 ===============
00:01:56.958 Drivers Enabled
00:01:56.958 ===============
00:01:56.958
00:01:56.958 common:
00:01:56.958
00:01:56.958 bus:
00:01:56.958 pci, vdev,
00:01:56.958 mempool:
00:01:56.958 ring,
00:01:56.958 dma:
00:01:56.958
00:01:56.958 net:
00:01:56.958
00:01:56.958 crypto:
00:01:56.958
00:01:56.958 compress:
00:01:56.958
00:01:56.958 vdpa:
00:01:56.958
00:01:56.958
00:01:56.958 Message:
00:01:56.958 =================
00:01:56.958 Content Skipped
00:01:56.958 =================
00:01:56.958
00:01:56.958 apps:
00:01:56.958 dumpcap: explicitly disabled via build config
00:01:56.958 graph: explicitly disabled via build config
00:01:56.958 pdump: explicitly disabled via build config
00:01:56.958 proc-info: explicitly disabled via build config
00:01:56.958 test-acl: explicitly disabled via build config
00:01:56.958 test-bbdev: explicitly disabled via build config
00:01:56.958 test-cmdline: explicitly disabled via build config
00:01:56.958 test-compress-perf: explicitly disabled via build config
00:01:56.958 test-crypto-perf: explicitly disabled via build config
00:01:56.958 test-dma-perf: explicitly disabled via build config
00:01:56.958 test-eventdev: explicitly disabled via build config
00:01:56.958 test-fib: explicitly disabled via build config
00:01:56.958 test-flow-perf: explicitly disabled via build config
00:01:56.958 test-gpudev: explicitly disabled via build config
00:01:56.958 test-mldev: explicitly disabled via build config
00:01:56.958 test-pipeline: explicitly disabled via build config
00:01:56.958 test-pmd: explicitly disabled via build config
00:01:56.958 test-regex: explicitly disabled via build config
00:01:56.958 test-sad: explicitly disabled via build config
00:01:56.958 test-security-perf: explicitly disabled via build config
00:01:56.958
00:01:56.958 libs:
00:01:56.958 metrics: explicitly disabled via build config
00:01:56.958 acl: explicitly disabled via build config
00:01:56.958 bbdev: explicitly disabled via build config
00:01:56.958 bitratestats: explicitly disabled via build config
00:01:56.958 bpf: explicitly disabled via build config
00:01:56.958 cfgfile: explicitly disabled via build config
00:01:56.958 distributor: explicitly disabled via build config
00:01:56.958 efd: explicitly disabled via build config
00:01:56.958 eventdev: explicitly disabled via build config
00:01:56.958 dispatcher: explicitly disabled via build config
00:01:56.958 gpudev: explicitly disabled via build config
00:01:56.958 gro: explicitly disabled via build config
00:01:56.958 gso: explicitly disabled via build config
00:01:56.958 ip_frag: explicitly disabled via build config
00:01:56.958 jobstats: explicitly disabled via build config
00:01:56.958 latencystats: explicitly disabled via build config
00:01:56.958 lpm: explicitly disabled via build config
00:01:56.958 member: explicitly disabled via build config
00:01:56.958 pcapng: explicitly disabled via build config
00:01:56.958 rawdev: explicitly disabled via build config
00:01:56.958 regexdev: explicitly disabled via build config
00:01:56.958 mldev: explicitly disabled via build config
00:01:56.958 rib: explicitly disabled via build config
00:01:56.958 sched: explicitly disabled via build config
00:01:56.958 stack: explicitly disabled via build config
00:01:56.958 ipsec: explicitly disabled via build config
00:01:56.958 pdcp: explicitly disabled via build config
00:01:56.958 fib: explicitly disabled via build config
00:01:56.958 port: explicitly disabled via build config
00:01:56.958 pdump: explicitly disabled via build config
00:01:56.958 table: explicitly disabled via build config
00:01:56.958 pipeline: explicitly disabled via build config
00:01:56.958 graph: explicitly disabled via build config
00:01:56.958 node: explicitly disabled via build config
00:01:56.958
00:01:56.958 drivers:
00:01:56.958 common/cpt: not in enabled drivers build config
00:01:56.958 common/dpaax: not in enabled drivers build config
00:01:56.958 common/iavf: not in enabled drivers build config
00:01:56.958 common/idpf: not in enabled drivers build config
00:01:56.958 common/mvep: not in enabled drivers build config
00:01:56.958 common/octeontx: not in enabled drivers build config
00:01:56.958 bus/auxiliary: not in enabled drivers build config
00:01:56.958 bus/cdx: not in enabled drivers build config
00:01:56.958 bus/dpaa: not in enabled drivers build config
00:01:56.958 bus/fslmc: not in enabled drivers build config
00:01:56.958 bus/ifpga: not in enabled drivers build config
00:01:56.958 bus/platform: not in enabled drivers build config
00:01:56.958 bus/vmbus: not in enabled drivers build config
00:01:56.958 common/cnxk: not in enabled drivers build config
00:01:56.958 common/mlx5: not in enabled drivers build config
00:01:56.958 common/nfp: not in enabled drivers build config
00:01:56.958 common/qat: not in enabled drivers build config
00:01:56.958 common/sfc_efx: not in enabled drivers build config
00:01:56.958 mempool/bucket: not in enabled drivers build config
00:01:56.958 mempool/cnxk: not in enabled drivers build config
00:01:56.958 mempool/dpaa: not in enabled drivers build config
00:01:56.958 mempool/dpaa2: not in enabled drivers build config
00:01:56.958 mempool/octeontx: not in enabled drivers build config
00:01:56.958 mempool/stack: not in enabled drivers build config
00:01:56.958 dma/cnxk: not in enabled drivers build config
00:01:56.958 dma/dpaa: not in enabled drivers build config
00:01:56.958 dma/dpaa2: not in enabled drivers build config
00:01:56.958 dma/hisilicon: not in enabled drivers build config
00:01:56.958 dma/idxd: not in enabled drivers build config
00:01:56.958 dma/ioat: not in enabled drivers build config
00:01:56.958 dma/skeleton: not in enabled drivers build config
00:01:56.958 net/af_packet: not in enabled drivers build config
00:01:56.958 net/af_xdp: not in enabled drivers build config
00:01:56.958 net/ark: not in enabled drivers build config
00:01:56.958 net/atlantic: not in enabled drivers build config
00:01:56.958 net/avp: not in enabled drivers build config
00:01:56.958 net/axgbe: not in enabled drivers build config
00:01:56.958 net/bnx2x: not in enabled drivers build config
00:01:56.958 net/bnxt: not in enabled drivers build config
00:01:56.958 net/bonding: not in enabled drivers build config
00:01:56.958 net/cnxk: not in enabled drivers build config
00:01:56.959 net/cpfl: not in enabled drivers build config
00:01:56.959 net/cxgbe: not in enabled drivers build config
00:01:56.959 net/dpaa: not in enabled drivers build config
00:01:56.959 net/dpaa2: not in enabled drivers build config
00:01:56.959 net/e1000: not in enabled drivers build config
00:01:56.959 net/ena: not in enabled drivers build config
00:01:56.959 net/enetc: not in enabled drivers build config
00:01:56.959 net/enetfec: not in enabled drivers build config
00:01:56.959 net/enic: not in enabled drivers build config
00:01:56.959 net/failsafe: not in enabled drivers build config
00:01:56.959 net/fm10k: not in enabled drivers build config
00:01:56.959 net/gve: not in enabled drivers build config
00:01:56.959 net/hinic: not in enabled drivers build config
00:01:56.959 net/hns3: not
in enabled drivers build config 00:01:56.959 net/i40e: not in enabled drivers build config 00:01:56.959 net/iavf: not in enabled drivers build config 00:01:56.959 net/ice: not in enabled drivers build config 00:01:56.959 net/idpf: not in enabled drivers build config 00:01:56.959 net/igc: not in enabled drivers build config 00:01:56.959 net/ionic: not in enabled drivers build config 00:01:56.959 net/ipn3ke: not in enabled drivers build config 00:01:56.959 net/ixgbe: not in enabled drivers build config 00:01:56.959 net/mana: not in enabled drivers build config 00:01:56.959 net/memif: not in enabled drivers build config 00:01:56.959 net/mlx4: not in enabled drivers build config 00:01:56.959 net/mlx5: not in enabled drivers build config 00:01:56.959 net/mvneta: not in enabled drivers build config 00:01:56.959 net/mvpp2: not in enabled drivers build config 00:01:56.959 net/netvsc: not in enabled drivers build config 00:01:56.959 net/nfb: not in enabled drivers build config 00:01:56.959 net/nfp: not in enabled drivers build config 00:01:56.959 net/ngbe: not in enabled drivers build config 00:01:56.959 net/null: not in enabled drivers build config 00:01:56.959 net/octeontx: not in enabled drivers build config 00:01:56.959 net/octeon_ep: not in enabled drivers build config 00:01:56.959 net/pcap: not in enabled drivers build config 00:01:56.959 net/pfe: not in enabled drivers build config 00:01:56.959 net/qede: not in enabled drivers build config 00:01:56.959 net/ring: not in enabled drivers build config 00:01:56.959 net/sfc: not in enabled drivers build config 00:01:56.959 net/softnic: not in enabled drivers build config 00:01:56.959 net/tap: not in enabled drivers build config 00:01:56.959 net/thunderx: not in enabled drivers build config 00:01:56.959 net/txgbe: not in enabled drivers build config 00:01:56.959 net/vdev_netvsc: not in enabled drivers build config 00:01:56.959 net/vhost: not in enabled drivers build config 00:01:56.959 net/virtio: not in enabled drivers 
build config 00:01:56.959 net/vmxnet3: not in enabled drivers build config 00:01:56.959 raw/*: missing internal dependency, "rawdev" 00:01:56.959 crypto/armv8: not in enabled drivers build config 00:01:56.959 crypto/bcmfs: not in enabled drivers build config 00:01:56.959 crypto/caam_jr: not in enabled drivers build config 00:01:56.959 crypto/ccp: not in enabled drivers build config 00:01:56.959 crypto/cnxk: not in enabled drivers build config 00:01:56.959 crypto/dpaa_sec: not in enabled drivers build config 00:01:56.959 crypto/dpaa2_sec: not in enabled drivers build config 00:01:56.959 crypto/ipsec_mb: not in enabled drivers build config 00:01:56.959 crypto/mlx5: not in enabled drivers build config 00:01:56.959 crypto/mvsam: not in enabled drivers build config 00:01:56.959 crypto/nitrox: not in enabled drivers build config 00:01:56.959 crypto/null: not in enabled drivers build config 00:01:56.959 crypto/octeontx: not in enabled drivers build config 00:01:56.959 crypto/openssl: not in enabled drivers build config 00:01:56.959 crypto/scheduler: not in enabled drivers build config 00:01:56.959 crypto/uadk: not in enabled drivers build config 00:01:56.959 crypto/virtio: not in enabled drivers build config 00:01:56.959 compress/isal: not in enabled drivers build config 00:01:56.959 compress/mlx5: not in enabled drivers build config 00:01:56.959 compress/octeontx: not in enabled drivers build config 00:01:56.959 compress/zlib: not in enabled drivers build config 00:01:56.959 regex/*: missing internal dependency, "regexdev" 00:01:56.959 ml/*: missing internal dependency, "mldev" 00:01:56.959 vdpa/ifc: not in enabled drivers build config 00:01:56.959 vdpa/mlx5: not in enabled drivers build config 00:01:56.959 vdpa/nfp: not in enabled drivers build config 00:01:56.959 vdpa/sfc: not in enabled drivers build config 00:01:56.959 event/*: missing internal dependency, "eventdev" 00:01:56.959 baseband/*: missing internal dependency, "bbdev" 00:01:56.959 gpu/*: missing internal 
dependency, "gpudev" 00:01:56.959 00:01:56.959 00:01:57.219 Build targets in project: 85 00:01:57.219 00:01:57.219 DPDK 23.11.0 00:01:57.219 00:01:57.219 User defined options 00:01:57.219 buildtype : debug 00:01:57.219 default_library : shared 00:01:57.219 libdir : lib 00:01:57.219 prefix : /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:01:57.219 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:57.219 c_link_args : 00:01:57.219 cpu_instruction_set: native 00:01:57.219 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:57.219 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:01:57.219 enable_docs : false 00:01:57.219 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:57.219 enable_kmods : false 00:01:57.219 tests : false 00:01:57.219 00:01:57.219 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.796 ninja: Entering directory `/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp' 00:01:57.796 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.796 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.796 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.796 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.796 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.796 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.796 [7/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.796 [8/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.796 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.796 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.796 [11/265] Linking static target lib/librte_kvargs.a 00:01:57.796 [12/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.796 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.796 [14/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.058 [15/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:58.058 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.058 [17/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.059 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.059 [19/265] Linking static target lib/librte_log.a 00:01:58.059 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:58.059 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.059 [22/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.059 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.059 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.059 [25/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.059 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.059 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.059 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.059 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.059 [30/265] Compiling C object 
lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:58.059 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.059 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.059 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.059 [34/265] Linking static target lib/librte_pci.a 00:01:58.059 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.059 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:58.059 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.323 [38/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.323 [39/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.323 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.323 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.323 [42/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.323 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.323 [44/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.323 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.323 [46/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.323 [47/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.323 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.323 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.323 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.323 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.323 [52/265] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.323 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.323 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.323 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.583 [56/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.583 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.583 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.583 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.583 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.583 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.583 [62/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.583 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.583 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.583 [65/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.583 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.583 [67/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.583 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.583 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.583 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.583 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.583 [72/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.583 [73/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.583 [74/265] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.583 [75/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.583 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.583 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.583 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.583 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.583 [80/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.583 [81/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.583 [82/265] Linking static target lib/librte_meter.a 00:01:58.583 [83/265] Linking static target lib/librte_ring.a 00:01:58.583 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.583 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.583 [86/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.583 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.583 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.583 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.583 [90/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.583 [91/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.583 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.583 [93/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.583 [94/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.583 [95/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.583 [96/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.583 [97/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.583 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.583 [99/265] Linking static target lib/librte_telemetry.a 00:01:58.583 [100/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.583 [101/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.583 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.583 [103/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.583 [104/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.583 [105/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:58.583 [106/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.583 [107/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.583 [108/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.583 [109/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:58.583 [110/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.583 [111/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.583 [112/265] Linking static target lib/librte_mempool.a 00:01:58.583 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.583 [114/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.583 [115/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.583 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.583 [117/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.583 [118/265] Linking static target lib/librte_cmdline.a 00:01:58.583 [119/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.583 [120/265] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.583 [121/265] Linking static target lib/librte_net.a 00:01:58.583 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.583 [123/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.583 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.583 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.583 [126/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.583 [127/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.583 [128/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.583 [129/265] Linking static target lib/librte_rcu.a 00:01:58.583 [130/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.583 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.844 [132/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.844 [133/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.844 [134/265] Linking target lib/librte_log.so.24.0 00:01:58.844 [135/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.844 [136/265] Linking static target lib/librte_eal.a 00:01:58.844 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.844 [138/265] Linking static target lib/librte_timer.a 00:01:58.844 [139/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.844 [140/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.844 [141/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.844 [142/265] Linking static target lib/librte_compressdev.a 00:01:58.844 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.844 [144/265] 
Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.844 [145/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:58.844 [146/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.844 [147/265] Linking static target lib/librte_mbuf.a 00:01:58.844 [148/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.844 [149/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.844 [150/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.844 [151/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.844 [152/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.844 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.844 [154/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.844 [155/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.844 [156/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.844 [157/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.844 [158/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:58.844 [159/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.844 [160/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.844 [161/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.844 [162/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.844 [163/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.844 [164/265] Linking target lib/librte_kvargs.so.24.0 00:01:58.844 [165/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:01:59.105 [166/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:59.105 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.105 [168/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.105 [169/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.105 [170/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.105 [171/265] Linking static target lib/librte_hash.a 00:01:59.105 [172/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.105 [173/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.105 [174/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.105 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.105 [176/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:59.105 [177/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:59.105 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.105 [179/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.105 [180/265] Linking static target lib/librte_dmadev.a 00:01:59.105 [181/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:59.105 [182/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.105 [183/265] Linking target lib/librte_telemetry.so.24.0 00:01:59.105 [184/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:59.105 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.105 [186/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:59.105 [187/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.105 [188/265] Linking static target lib/librte_reorder.a 
00:01:59.105 [189/265] Linking static target lib/librte_power.a 00:01:59.105 [190/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.105 [191/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.105 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.105 [193/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.105 [194/265] Linking static target lib/librte_security.a 00:01:59.105 [195/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:59.105 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.364 [197/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.364 [198/265] Linking static target lib/librte_cryptodev.a 00:01:59.364 [199/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.364 [200/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.364 [201/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.364 [202/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.364 [203/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.364 [204/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.364 [205/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.364 [206/265] Linking static target drivers/librte_mempool_ring.a 00:01:59.364 [207/265] Linking static target drivers/librte_bus_vdev.a 00:01:59.364 [208/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.364 [209/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.364 [210/265] Compiling C object 
drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.364 [211/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.364 [212/265] Linking static target drivers/librte_bus_pci.a 00:01:59.623 [213/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.623 [214/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.623 [215/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.623 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:59.623 [217/265] Linking static target lib/librte_ethdev.a 00:01:59.623 [218/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.623 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.883 [220/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.883 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.883 [222/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.883 [223/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.143 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.084 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:01.084 [226/265] Linking static target lib/librte_vhost.a 00:02:01.084 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.998 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.285 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:09.225 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.225 [231/265] Linking target lib/librte_eal.so.24.0 00:02:09.225 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:09.225 [233/265] Linking target lib/librte_ring.so.24.0 00:02:09.225 [234/265] Linking target lib/librte_meter.so.24.0 00:02:09.225 [235/265] Linking target lib/librte_pci.so.24.0 00:02:09.225 [236/265] Linking target lib/librte_timer.so.24.0 00:02:09.225 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:09.225 [238/265] Linking target lib/librte_dmadev.so.24.0 00:02:09.484 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:09.484 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:09.484 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:09.484 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:09.484 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:09.484 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:09.484 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:09.484 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:09.484 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:09.484 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:09.484 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:09.484 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:09.743 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:09.743 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:09.743 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:09.743 [254/265] Linking target lib/librte_net.so.24.0 
00:02:09.743 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:10.003 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:10.003 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:10.003 [258/265] Linking target lib/librte_cmdline.so.24.0 00:02:10.003 [259/265] Linking target lib/librte_hash.so.24.0 00:02:10.003 [260/265] Linking target lib/librte_ethdev.so.24.0 00:02:10.003 [261/265] Linking target lib/librte_security.so.24.0 00:02:10.003 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:10.003 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:10.263 [264/265] Linking target lib/librte_power.so.24.0 00:02:10.263 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:10.263 INFO: autodetecting backend as ninja 00:02:10.263 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:11.201 CC lib/log/log.o 00:02:11.201 CC lib/log/log_flags.o 00:02:11.201 CC lib/log/log_deprecated.o 00:02:11.201 CC lib/ut/ut.o 00:02:11.201 CC lib/ut_mock/mock.o 00:02:11.201 LIB libspdk_ut_mock.a 00:02:11.201 LIB libspdk_ut.a 00:02:11.201 LIB libspdk_log.a 00:02:11.201 SO libspdk_ut_mock.so.5.0 00:02:11.201 SO libspdk_ut.so.1.0 00:02:11.201 SO libspdk_log.so.6.1 00:02:11.201 SYMLINK libspdk_ut_mock.so 00:02:11.201 SYMLINK libspdk_ut.so 00:02:11.201 SYMLINK libspdk_log.so 00:02:11.460 CC lib/dma/dma.o 00:02:11.460 CXX lib/trace_parser/trace.o 00:02:11.460 CC lib/util/base64.o 00:02:11.460 CC lib/util/cpuset.o 00:02:11.460 CC lib/util/bit_array.o 00:02:11.460 CC lib/util/crc16.o 00:02:11.460 CC lib/util/crc32.o 00:02:11.460 CC lib/util/crc32_ieee.o 00:02:11.460 CC lib/util/crc32c.o 00:02:11.460 CC lib/ioat/ioat.o 00:02:11.460 CC lib/util/crc64.o 00:02:11.460 CC lib/util/dif.o 00:02:11.460 CC lib/util/fd.o 
00:02:11.460 CC lib/util/file.o 00:02:11.460 CC lib/util/hexlify.o 00:02:11.460 CC lib/util/iov.o 00:02:11.460 CC lib/util/math.o 00:02:11.460 CC lib/util/pipe.o 00:02:11.460 CC lib/util/strerror_tls.o 00:02:11.460 CC lib/util/string.o 00:02:11.460 CC lib/util/uuid.o 00:02:11.460 CC lib/util/fd_group.o 00:02:11.460 CC lib/util/xor.o 00:02:11.460 CC lib/util/zipf.o 00:02:11.720 CC lib/vfio_user/host/vfio_user_pci.o 00:02:11.720 CC lib/vfio_user/host/vfio_user.o 00:02:11.720 LIB libspdk_dma.a 00:02:11.720 SO libspdk_dma.so.3.0 00:02:11.720 SYMLINK libspdk_dma.so 00:02:11.720 LIB libspdk_ioat.a 00:02:11.720 SO libspdk_ioat.so.6.0 00:02:11.980 LIB libspdk_vfio_user.a 00:02:11.980 SYMLINK libspdk_ioat.so 00:02:11.981 SO libspdk_vfio_user.so.4.0 00:02:11.981 SYMLINK libspdk_vfio_user.so 00:02:11.981 LIB libspdk_util.a 00:02:11.981 SO libspdk_util.so.8.0 00:02:12.241 SYMLINK libspdk_util.so 00:02:12.241 CC lib/rdma/common.o 00:02:12.241 CC lib/rdma/rdma_verbs.o 00:02:12.241 CC lib/json/json_util.o 00:02:12.241 CC lib/json/json_parse.o 00:02:12.241 CC lib/json/json_write.o 00:02:12.241 CC lib/idxd/idxd.o 00:02:12.241 CC lib/conf/conf.o 00:02:12.241 CC lib/idxd/idxd_user.o 00:02:12.241 CC lib/env_dpdk/env.o 00:02:12.241 CC lib/idxd/idxd_kernel.o 00:02:12.241 CC lib/env_dpdk/memory.o 00:02:12.241 CC lib/env_dpdk/pci.o 00:02:12.241 CC lib/vmd/vmd.o 00:02:12.241 CC lib/env_dpdk/init.o 00:02:12.241 CC lib/vmd/led.o 00:02:12.241 CC lib/env_dpdk/threads.o 00:02:12.241 CC lib/env_dpdk/pci_ioat.o 00:02:12.241 CC lib/env_dpdk/pci_virtio.o 00:02:12.241 CC lib/env_dpdk/pci_vmd.o 00:02:12.241 CC lib/env_dpdk/pci_event.o 00:02:12.241 CC lib/env_dpdk/pci_idxd.o 00:02:12.241 CC lib/env_dpdk/sigbus_handler.o 00:02:12.241 CC lib/env_dpdk/pci_dpdk.o 00:02:12.241 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:12.241 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:12.501 LIB libspdk_conf.a 00:02:12.501 LIB libspdk_json.a 00:02:12.501 LIB libspdk_rdma.a 00:02:12.501 SO libspdk_conf.so.5.0 00:02:12.501 SO 
libspdk_json.so.5.1 00:02:12.501 SO libspdk_rdma.so.5.0 00:02:12.759 SYMLINK libspdk_conf.so 00:02:12.759 SYMLINK libspdk_json.so 00:02:12.759 SYMLINK libspdk_rdma.so 00:02:12.759 LIB libspdk_idxd.a 00:02:12.759 SO libspdk_idxd.so.11.0 00:02:12.759 LIB libspdk_vmd.a 00:02:12.759 SO libspdk_vmd.so.5.0 00:02:12.759 CC lib/jsonrpc/jsonrpc_server.o 00:02:12.759 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:12.759 CC lib/jsonrpc/jsonrpc_client.o 00:02:12.759 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.759 SYMLINK libspdk_idxd.so 00:02:13.019 SYMLINK libspdk_vmd.so 00:02:13.019 LIB libspdk_jsonrpc.a 00:02:13.019 LIB libspdk_trace_parser.a 00:02:13.019 SO libspdk_trace_parser.so.4.0 00:02:13.019 SO libspdk_jsonrpc.so.5.1 00:02:13.279 SYMLINK libspdk_jsonrpc.so 00:02:13.279 SYMLINK libspdk_trace_parser.so 00:02:13.279 LIB libspdk_env_dpdk.a 00:02:13.279 CC lib/rpc/rpc.o 00:02:13.538 SO libspdk_env_dpdk.so.13.0 00:02:13.538 SYMLINK libspdk_env_dpdk.so 00:02:13.538 LIB libspdk_rpc.a 00:02:13.538 SO libspdk_rpc.so.5.0 00:02:13.538 SYMLINK libspdk_rpc.so 00:02:13.798 CC lib/notify/notify.o 00:02:13.798 CC lib/notify/notify_rpc.o 00:02:13.798 CC lib/trace/trace.o 00:02:13.798 CC lib/trace/trace_flags.o 00:02:13.798 CC lib/trace/trace_rpc.o 00:02:13.798 CC lib/sock/sock.o 00:02:13.798 CC lib/sock/sock_rpc.o 00:02:14.058 LIB libspdk_notify.a 00:02:14.058 SO libspdk_notify.so.5.0 00:02:14.058 LIB libspdk_trace.a 00:02:14.058 SYMLINK libspdk_notify.so 00:02:14.058 SO libspdk_trace.so.9.0 00:02:14.058 SYMLINK libspdk_trace.so 00:02:14.058 LIB libspdk_sock.a 00:02:14.319 SO libspdk_sock.so.8.0 00:02:14.319 SYMLINK libspdk_sock.so 00:02:14.319 CC lib/thread/thread.o 00:02:14.319 CC lib/thread/iobuf.o 00:02:14.579 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.579 CC lib/nvme/nvme_ctrlr.o 00:02:14.579 CC lib/nvme/nvme_fabric.o 00:02:14.579 CC lib/nvme/nvme_ns_cmd.o 00:02:14.579 CC lib/nvme/nvme_ns.o 00:02:14.579 CC lib/nvme/nvme_pcie_common.o 00:02:14.579 CC lib/nvme/nvme_pcie.o 00:02:14.579 CC 
lib/nvme/nvme_qpair.o 00:02:14.579 CC lib/nvme/nvme.o 00:02:14.579 CC lib/nvme/nvme_quirks.o 00:02:14.579 CC lib/nvme/nvme_transport.o 00:02:14.579 CC lib/nvme/nvme_discovery.o 00:02:14.579 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.579 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.579 CC lib/nvme/nvme_tcp.o 00:02:14.579 CC lib/nvme/nvme_opal.o 00:02:14.579 CC lib/nvme/nvme_io_msg.o 00:02:14.579 CC lib/nvme/nvme_poll_group.o 00:02:14.579 CC lib/nvme/nvme_zns.o 00:02:14.579 CC lib/nvme/nvme_cuse.o 00:02:14.579 CC lib/nvme/nvme_vfio_user.o 00:02:14.579 CC lib/nvme/nvme_rdma.o 00:02:15.520 LIB libspdk_thread.a 00:02:15.520 SO libspdk_thread.so.9.0 00:02:15.520 SYMLINK libspdk_thread.so 00:02:15.780 CC lib/accel/accel.o 00:02:15.780 CC lib/accel/accel_sw.o 00:02:15.780 CC lib/accel/accel_rpc.o 00:02:15.780 CC lib/init/json_config.o 00:02:15.780 CC lib/init/subsystem.o 00:02:15.780 CC lib/init/subsystem_rpc.o 00:02:15.780 CC lib/init/rpc.o 00:02:15.780 CC lib/blob/blobstore.o 00:02:15.780 CC lib/virtio/virtio.o 00:02:15.780 CC lib/blob/request.o 00:02:15.780 CC lib/virtio/virtio_vhost_user.o 00:02:15.780 CC lib/blob/zeroes.o 00:02:15.780 CC lib/virtio/virtio_vfio_user.o 00:02:15.780 CC lib/blob/blob_bs_dev.o 00:02:15.780 CC lib/virtio/virtio_pci.o 00:02:16.039 LIB libspdk_init.a 00:02:16.039 LIB libspdk_nvme.a 00:02:16.039 SO libspdk_init.so.4.0 00:02:16.039 LIB libspdk_virtio.a 00:02:16.039 SYMLINK libspdk_init.so 00:02:16.039 SO libspdk_nvme.so.12.0 00:02:16.039 SO libspdk_virtio.so.6.0 00:02:16.299 SYMLINK libspdk_virtio.so 00:02:16.299 SYMLINK libspdk_nvme.so 00:02:16.299 CC lib/event/app.o 00:02:16.299 CC lib/event/reactor.o 00:02:16.299 CC lib/event/log_rpc.o 00:02:16.299 CC lib/event/app_rpc.o 00:02:16.299 CC lib/event/scheduler_static.o 00:02:16.559 LIB libspdk_accel.a 00:02:16.559 SO libspdk_accel.so.14.0 00:02:16.559 LIB libspdk_event.a 00:02:16.559 SYMLINK libspdk_accel.so 00:02:16.559 SO libspdk_event.so.12.0 00:02:16.559 SYMLINK libspdk_event.so 00:02:16.820 CC 
lib/bdev/bdev.o 00:02:16.820 CC lib/bdev/bdev_rpc.o 00:02:16.820 CC lib/bdev/bdev_zone.o 00:02:16.820 CC lib/bdev/part.o 00:02:16.820 CC lib/bdev/scsi_nvme.o 00:02:17.760 LIB libspdk_blob.a 00:02:17.760 SO libspdk_blob.so.10.1 00:02:17.760 SYMLINK libspdk_blob.so 00:02:18.019 CC lib/lvol/lvol.o 00:02:18.019 CC lib/blobfs/blobfs.o 00:02:18.019 CC lib/blobfs/tree.o 00:02:18.588 LIB libspdk_bdev.a 00:02:18.588 SO libspdk_bdev.so.14.0 00:02:18.588 LIB libspdk_blobfs.a 00:02:18.588 LIB libspdk_lvol.a 00:02:18.588 SO libspdk_blobfs.so.9.0 00:02:18.588 SO libspdk_lvol.so.9.1 00:02:18.588 SYMLINK libspdk_bdev.so 00:02:18.588 SYMLINK libspdk_blobfs.so 00:02:18.588 SYMLINK libspdk_lvol.so 00:02:18.848 CC lib/nvmf/ctrlr.o 00:02:18.848 CC lib/nvmf/ctrlr_discovery.o 00:02:18.848 CC lib/nvmf/ctrlr_bdev.o 00:02:18.848 CC lib/nvmf/subsystem.o 00:02:18.848 CC lib/scsi/dev.o 00:02:18.848 CC lib/scsi/lun.o 00:02:18.848 CC lib/ftl/ftl_core.o 00:02:18.848 CC lib/scsi/scsi.o 00:02:18.848 CC lib/nvmf/nvmf.o 00:02:18.848 CC lib/scsi/port.o 00:02:18.848 CC lib/ftl/ftl_init.o 00:02:18.848 CC lib/nvmf/nvmf_rpc.o 00:02:18.848 CC lib/nvmf/transport.o 00:02:18.848 CC lib/ftl/ftl_layout.o 00:02:18.848 CC lib/nvmf/tcp.o 00:02:18.848 CC lib/ftl/ftl_debug.o 00:02:18.848 CC lib/scsi/scsi_pr.o 00:02:18.848 CC lib/scsi/scsi_rpc.o 00:02:18.848 CC lib/ftl/ftl_io.o 00:02:18.848 CC lib/nvmf/rdma.o 00:02:18.848 CC lib/scsi/scsi_bdev.o 00:02:18.848 CC lib/ftl/ftl_l2p.o 00:02:18.848 CC lib/scsi/task.o 00:02:18.848 CC lib/ftl/ftl_sb.o 00:02:18.848 CC lib/ftl/ftl_l2p_flat.o 00:02:18.848 CC lib/ftl/ftl_nv_cache.o 00:02:18.848 CC lib/ftl/ftl_band.o 00:02:18.848 CC lib/ftl/ftl_band_ops.o 00:02:18.848 CC lib/ftl/ftl_writer.o 00:02:18.848 CC lib/ublk/ublk.o 00:02:18.848 CC lib/ftl/ftl_reloc.o 00:02:18.848 CC lib/ublk/ublk_rpc.o 00:02:18.848 CC lib/ftl/ftl_rq.o 00:02:18.848 CC lib/nbd/nbd.o 00:02:18.848 CC lib/nbd/nbd_rpc.o 00:02:18.848 CC lib/ftl/ftl_p2l.o 00:02:18.848 CC lib/ftl/ftl_l2p_cache.o 00:02:18.848 CC 
lib/ftl/mngt/ftl_mngt.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:18.848 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:18.848 CC lib/ftl/utils/ftl_conf.o 00:02:18.848 CC lib/ftl/utils/ftl_md.o 00:02:18.848 CC lib/ftl/utils/ftl_mempool.o 00:02:18.848 CC lib/ftl/utils/ftl_bitmap.o 00:02:18.848 CC lib/ftl/utils/ftl_property.o 00:02:18.848 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:18.848 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:18.848 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:18.848 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:18.848 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:18.848 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:18.848 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:18.848 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:18.848 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:18.848 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:18.848 CC lib/ftl/base/ftl_base_bdev.o 00:02:18.848 CC lib/ftl/base/ftl_base_dev.o 00:02:18.848 CC lib/ftl/ftl_trace.o 00:02:19.416 LIB libspdk_nbd.a 00:02:19.416 SO libspdk_nbd.so.6.0 00:02:19.416 LIB libspdk_scsi.a 00:02:19.416 SYMLINK libspdk_nbd.so 00:02:19.416 SO libspdk_scsi.so.8.0 00:02:19.416 SYMLINK libspdk_scsi.so 00:02:19.675 LIB libspdk_ublk.a 00:02:19.675 SO libspdk_ublk.so.2.0 00:02:19.675 SYMLINK libspdk_ublk.so 00:02:19.675 CC lib/vhost/vhost_rpc.o 00:02:19.675 CC lib/vhost/vhost_scsi.o 00:02:19.675 CC lib/vhost/vhost.o 00:02:19.675 CC lib/vhost/vhost_blk.o 00:02:19.675 CC lib/vhost/rte_vhost_user.o 00:02:19.675 CC lib/iscsi/conn.o 00:02:19.675 CC lib/iscsi/init_grp.o 00:02:19.675 CC 
lib/iscsi/iscsi.o 00:02:19.675 CC lib/iscsi/md5.o 00:02:19.675 CC lib/iscsi/param.o 00:02:19.675 CC lib/iscsi/iscsi_subsystem.o 00:02:19.675 CC lib/iscsi/portal_grp.o 00:02:19.675 CC lib/iscsi/tgt_node.o 00:02:19.675 CC lib/iscsi/iscsi_rpc.o 00:02:19.675 CC lib/iscsi/task.o 00:02:19.675 LIB libspdk_ftl.a 00:02:19.935 SO libspdk_ftl.so.8.0 00:02:20.195 SYMLINK libspdk_ftl.so 00:02:20.455 LIB libspdk_vhost.a 00:02:20.455 SO libspdk_vhost.so.7.1 00:02:20.455 LIB libspdk_nvmf.a 00:02:20.715 SO libspdk_nvmf.so.17.0 00:02:20.715 SYMLINK libspdk_vhost.so 00:02:20.715 LIB libspdk_iscsi.a 00:02:20.715 SYMLINK libspdk_nvmf.so 00:02:20.715 SO libspdk_iscsi.so.7.0 00:02:20.975 SYMLINK libspdk_iscsi.so 00:02:21.236 CC module/env_dpdk/env_dpdk_rpc.o 00:02:21.236 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:21.236 CC module/accel/iaa/accel_iaa.o 00:02:21.236 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:21.236 CC module/accel/iaa/accel_iaa_rpc.o 00:02:21.236 CC module/sock/posix/posix.o 00:02:21.236 CC module/accel/error/accel_error.o 00:02:21.236 CC module/accel/error/accel_error_rpc.o 00:02:21.236 CC module/blob/bdev/blob_bdev.o 00:02:21.236 CC module/accel/dsa/accel_dsa.o 00:02:21.236 CC module/scheduler/gscheduler/gscheduler.o 00:02:21.236 CC module/accel/dsa/accel_dsa_rpc.o 00:02:21.236 CC module/accel/ioat/accel_ioat.o 00:02:21.236 CC module/accel/ioat/accel_ioat_rpc.o 00:02:21.236 LIB libspdk_env_dpdk_rpc.a 00:02:21.236 SO libspdk_env_dpdk_rpc.so.5.0 00:02:21.496 SYMLINK libspdk_env_dpdk_rpc.so 00:02:21.496 LIB libspdk_scheduler_gscheduler.a 00:02:21.496 LIB libspdk_scheduler_dpdk_governor.a 00:02:21.496 LIB libspdk_accel_error.a 00:02:21.496 SO libspdk_scheduler_gscheduler.so.3.0 00:02:21.496 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:21.496 LIB libspdk_accel_ioat.a 00:02:21.496 SO libspdk_accel_error.so.1.0 00:02:21.496 LIB libspdk_scheduler_dynamic.a 00:02:21.496 LIB libspdk_accel_iaa.a 00:02:21.496 SO libspdk_accel_ioat.so.5.0 00:02:21.496 
SYMLINK libspdk_scheduler_gscheduler.so 00:02:21.496 SO libspdk_scheduler_dynamic.so.3.0 00:02:21.496 LIB libspdk_accel_dsa.a 00:02:21.496 SO libspdk_accel_iaa.so.2.0 00:02:21.496 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:21.496 LIB libspdk_blob_bdev.a 00:02:21.496 SYMLINK libspdk_accel_error.so 00:02:21.496 SO libspdk_accel_dsa.so.4.0 00:02:21.496 SYMLINK libspdk_scheduler_dynamic.so 00:02:21.496 SYMLINK libspdk_accel_ioat.so 00:02:21.496 SO libspdk_blob_bdev.so.10.1 00:02:21.496 SYMLINK libspdk_accel_iaa.so 00:02:21.496 SYMLINK libspdk_accel_dsa.so 00:02:21.755 SYMLINK libspdk_blob_bdev.so 00:02:21.755 LIB libspdk_sock_posix.a 00:02:22.014 SO libspdk_sock_posix.so.5.0 00:02:22.014 CC module/bdev/delay/vbdev_delay.o 00:02:22.014 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:22.014 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:22.014 CC module/bdev/nvme/bdev_nvme.o 00:02:22.014 CC module/bdev/malloc/bdev_malloc.o 00:02:22.014 CC module/bdev/nvme/nvme_rpc.o 00:02:22.014 CC module/bdev/nvme/bdev_mdns_client.o 00:02:22.014 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:22.014 CC module/bdev/aio/bdev_aio_rpc.o 00:02:22.014 CC module/bdev/aio/bdev_aio.o 00:02:22.014 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:22.014 CC module/bdev/nvme/vbdev_opal.o 00:02:22.014 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:22.014 CC module/bdev/error/vbdev_error.o 00:02:22.014 CC module/bdev/iscsi/bdev_iscsi.o 00:02:22.014 CC module/bdev/error/vbdev_error_rpc.o 00:02:22.014 CC module/bdev/gpt/gpt.o 00:02:22.014 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:22.014 CC module/bdev/passthru/vbdev_passthru.o 00:02:22.014 CC module/bdev/gpt/vbdev_gpt.o 00:02:22.014 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:22.014 CC module/bdev/split/vbdev_split.o 00:02:22.014 CC module/bdev/lvol/vbdev_lvol.o 00:02:22.014 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:22.014 CC module/bdev/split/vbdev_split_rpc.o 00:02:22.014 CC module/bdev/null/bdev_null.o 00:02:22.014 CC module/bdev/ftl/bdev_ftl.o 
00:02:22.014 CC module/bdev/null/bdev_null_rpc.o 00:02:22.014 CC module/blobfs/bdev/blobfs_bdev.o 00:02:22.014 CC module/bdev/raid/bdev_raid.o 00:02:22.014 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:22.014 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:22.014 CC module/bdev/raid/bdev_raid_rpc.o 00:02:22.014 CC module/bdev/raid/bdev_raid_sb.o 00:02:22.014 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:22.014 CC module/bdev/raid/raid0.o 00:02:22.014 CC module/bdev/raid/raid1.o 00:02:22.014 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:22.014 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:22.014 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:22.014 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:22.014 CC module/bdev/raid/concat.o 00:02:22.014 SYMLINK libspdk_sock_posix.so 00:02:22.273 LIB libspdk_blobfs_bdev.a 00:02:22.273 SO libspdk_blobfs_bdev.so.5.0 00:02:22.273 LIB libspdk_bdev_split.a 00:02:22.273 SO libspdk_bdev_split.so.5.0 00:02:22.273 LIB libspdk_bdev_gpt.a 00:02:22.273 LIB libspdk_bdev_null.a 00:02:22.273 SYMLINK libspdk_blobfs_bdev.so 00:02:22.273 LIB libspdk_bdev_ftl.a 00:02:22.273 LIB libspdk_bdev_error.a 00:02:22.273 SO libspdk_bdev_null.so.5.0 00:02:22.273 SO libspdk_bdev_gpt.so.5.0 00:02:22.273 LIB libspdk_bdev_aio.a 00:02:22.273 LIB libspdk_bdev_passthru.a 00:02:22.273 LIB libspdk_bdev_delay.a 00:02:22.273 LIB libspdk_bdev_malloc.a 00:02:22.274 SO libspdk_bdev_error.so.5.0 00:02:22.274 SYMLINK libspdk_bdev_split.so 00:02:22.274 SO libspdk_bdev_ftl.so.5.0 00:02:22.274 SO libspdk_bdev_aio.so.5.0 00:02:22.274 SO libspdk_bdev_passthru.so.5.0 00:02:22.274 LIB libspdk_bdev_zone_block.a 00:02:22.274 SO libspdk_bdev_malloc.so.5.0 00:02:22.274 SO libspdk_bdev_delay.so.5.0 00:02:22.274 SYMLINK libspdk_bdev_null.so 00:02:22.274 SYMLINK libspdk_bdev_gpt.so 00:02:22.274 LIB libspdk_bdev_iscsi.a 00:02:22.274 SYMLINK libspdk_bdev_error.so 00:02:22.274 SYMLINK libspdk_bdev_ftl.so 00:02:22.274 SO libspdk_bdev_zone_block.so.5.0 00:02:22.274 SYMLINK 
libspdk_bdev_aio.so 00:02:22.274 SO libspdk_bdev_iscsi.so.5.0 00:02:22.274 SYMLINK libspdk_bdev_delay.so 00:02:22.274 SYMLINK libspdk_bdev_passthru.so 00:02:22.274 SYMLINK libspdk_bdev_malloc.so 00:02:22.533 SYMLINK libspdk_bdev_zone_block.so 00:02:22.533 SYMLINK libspdk_bdev_iscsi.so 00:02:22.533 LIB libspdk_bdev_lvol.a 00:02:22.533 LIB libspdk_bdev_virtio.a 00:02:22.533 SO libspdk_bdev_lvol.so.5.0 00:02:22.533 SO libspdk_bdev_virtio.so.5.0 00:02:22.533 SYMLINK libspdk_bdev_lvol.so 00:02:22.533 SYMLINK libspdk_bdev_virtio.so 00:02:22.793 LIB libspdk_bdev_raid.a 00:02:22.793 SO libspdk_bdev_raid.so.5.0 00:02:22.793 SYMLINK libspdk_bdev_raid.so 00:02:23.734 LIB libspdk_bdev_nvme.a 00:02:23.734 SO libspdk_bdev_nvme.so.6.0 00:02:23.734 SYMLINK libspdk_bdev_nvme.so 00:02:23.994 CC module/event/subsystems/iobuf/iobuf.o 00:02:23.994 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:23.994 CC module/event/subsystems/vmd/vmd.o 00:02:23.994 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:23.994 CC module/event/subsystems/scheduler/scheduler.o 00:02:23.994 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:23.994 CC module/event/subsystems/sock/sock.o 00:02:24.254 LIB libspdk_event_sock.a 00:02:24.254 LIB libspdk_event_vhost_blk.a 00:02:24.254 LIB libspdk_event_vmd.a 00:02:24.254 LIB libspdk_event_iobuf.a 00:02:24.254 LIB libspdk_event_scheduler.a 00:02:24.254 SO libspdk_event_sock.so.4.0 00:02:24.254 SO libspdk_event_vmd.so.5.0 00:02:24.254 SO libspdk_event_vhost_blk.so.2.0 00:02:24.254 SO libspdk_event_scheduler.so.3.0 00:02:24.254 SO libspdk_event_iobuf.so.2.0 00:02:24.254 SYMLINK libspdk_event_sock.so 00:02:24.254 SYMLINK libspdk_event_vhost_blk.so 00:02:24.254 SYMLINK libspdk_event_vmd.so 00:02:24.254 SYMLINK libspdk_event_scheduler.so 00:02:24.254 SYMLINK libspdk_event_iobuf.so 00:02:24.513 CC module/event/subsystems/accel/accel.o 00:02:24.513 LIB libspdk_event_accel.a 00:02:24.774 SO libspdk_event_accel.so.5.0 00:02:24.774 SYMLINK libspdk_event_accel.so 
00:02:25.046 CC module/event/subsystems/bdev/bdev.o 00:02:25.046 LIB libspdk_event_bdev.a 00:02:25.046 SO libspdk_event_bdev.so.5.0 00:02:25.046 SYMLINK libspdk_event_bdev.so 00:02:25.312 CC module/event/subsystems/scsi/scsi.o 00:02:25.312 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:25.312 CC module/event/subsystems/nbd/nbd.o 00:02:25.312 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:25.312 CC module/event/subsystems/ublk/ublk.o 00:02:25.578 LIB libspdk_event_ublk.a 00:02:25.578 LIB libspdk_event_nbd.a 00:02:25.578 LIB libspdk_event_scsi.a 00:02:25.578 SO libspdk_event_ublk.so.2.0 00:02:25.578 SO libspdk_event_nbd.so.5.0 00:02:25.578 SO libspdk_event_scsi.so.5.0 00:02:25.578 LIB libspdk_event_nvmf.a 00:02:25.578 SYMLINK libspdk_event_ublk.so 00:02:25.578 SO libspdk_event_nvmf.so.5.0 00:02:25.578 SYMLINK libspdk_event_nbd.so 00:02:25.578 SYMLINK libspdk_event_scsi.so 00:02:25.578 SYMLINK libspdk_event_nvmf.so 00:02:25.853 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:25.853 CC module/event/subsystems/iscsi/iscsi.o 00:02:25.853 LIB libspdk_event_vhost_scsi.a 00:02:25.853 LIB libspdk_event_iscsi.a 00:02:25.853 SO libspdk_event_vhost_scsi.so.2.0 00:02:26.137 SO libspdk_event_iscsi.so.5.0 00:02:26.137 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.137 SYMLINK libspdk_event_iscsi.so 00:02:26.137 SO libspdk.so.5.0 00:02:26.137 SYMLINK libspdk.so 00:02:26.448 CXX app/trace/trace.o 00:02:26.448 CC app/spdk_lspci/spdk_lspci.o 00:02:26.448 CC app/spdk_nvme_identify/identify.o 00:02:26.448 CC app/trace_record/trace_record.o 00:02:26.448 CC test/rpc_client/rpc_client_test.o 00:02:26.448 CC app/spdk_nvme_discover/discovery_aer.o 00:02:26.448 TEST_HEADER include/spdk/accel_module.h 00:02:26.448 TEST_HEADER include/spdk/assert.h 00:02:26.448 CC app/spdk_nvme_perf/perf.o 00:02:26.448 TEST_HEADER include/spdk/accel.h 00:02:26.448 TEST_HEADER include/spdk/barrier.h 00:02:26.448 TEST_HEADER include/spdk/base64.h 00:02:26.448 TEST_HEADER include/spdk/bdev.h 
00:02:26.448 CC app/spdk_top/spdk_top.o 00:02:26.448 TEST_HEADER include/spdk/bdev_module.h 00:02:26.448 TEST_HEADER include/spdk/bdev_zone.h 00:02:26.448 TEST_HEADER include/spdk/bit_array.h 00:02:26.448 TEST_HEADER include/spdk/bit_pool.h 00:02:26.448 TEST_HEADER include/spdk/blob_bdev.h 00:02:26.448 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:26.448 TEST_HEADER include/spdk/blobfs.h 00:02:26.448 TEST_HEADER include/spdk/conf.h 00:02:26.448 TEST_HEADER include/spdk/blob.h 00:02:26.448 TEST_HEADER include/spdk/config.h 00:02:26.448 TEST_HEADER include/spdk/cpuset.h 00:02:26.448 TEST_HEADER include/spdk/crc16.h 00:02:26.448 TEST_HEADER include/spdk/crc32.h 00:02:26.448 TEST_HEADER include/spdk/crc64.h 00:02:26.448 TEST_HEADER include/spdk/dif.h 00:02:26.448 TEST_HEADER include/spdk/dma.h 00:02:26.448 TEST_HEADER include/spdk/endian.h 00:02:26.448 TEST_HEADER include/spdk/env.h 00:02:26.448 TEST_HEADER include/spdk/env_dpdk.h 00:02:26.448 TEST_HEADER include/spdk/event.h 00:02:26.448 TEST_HEADER include/spdk/fd_group.h 00:02:26.448 TEST_HEADER include/spdk/fd.h 00:02:26.448 TEST_HEADER include/spdk/ftl.h 00:02:26.448 TEST_HEADER include/spdk/file.h 00:02:26.448 TEST_HEADER include/spdk/gpt_spec.h 00:02:26.448 TEST_HEADER include/spdk/hexlify.h 00:02:26.448 TEST_HEADER include/spdk/idxd.h 00:02:26.448 TEST_HEADER include/spdk/histogram_data.h 00:02:26.448 TEST_HEADER include/spdk/idxd_spec.h 00:02:26.448 TEST_HEADER include/spdk/init.h 00:02:26.448 TEST_HEADER include/spdk/ioat.h 00:02:26.448 TEST_HEADER include/spdk/ioat_spec.h 00:02:26.448 TEST_HEADER include/spdk/json.h 00:02:26.448 TEST_HEADER include/spdk/jsonrpc.h 00:02:26.448 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:26.448 TEST_HEADER include/spdk/iscsi_spec.h 00:02:26.448 TEST_HEADER include/spdk/likely.h 00:02:26.448 TEST_HEADER include/spdk/lvol.h 00:02:26.448 CC app/nvmf_tgt/nvmf_main.o 00:02:26.448 TEST_HEADER include/spdk/memory.h 00:02:26.448 TEST_HEADER include/spdk/log.h 00:02:26.448 
TEST_HEADER include/spdk/mmio.h 00:02:26.448 CC app/iscsi_tgt/iscsi_tgt.o 00:02:26.448 TEST_HEADER include/spdk/nbd.h 00:02:26.448 TEST_HEADER include/spdk/notify.h 00:02:26.448 CC app/spdk_dd/spdk_dd.o 00:02:26.448 TEST_HEADER include/spdk/nvme.h 00:02:26.448 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:26.448 TEST_HEADER include/spdk/nvme_intel.h 00:02:26.448 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:26.448 TEST_HEADER include/spdk/nvme_zns.h 00:02:26.448 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:26.448 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:26.448 TEST_HEADER include/spdk/nvme_spec.h 00:02:26.448 TEST_HEADER include/spdk/nvmf_spec.h 00:02:26.448 TEST_HEADER include/spdk/nvmf_transport.h 00:02:26.448 CC app/vhost/vhost.o 00:02:26.448 TEST_HEADER include/spdk/nvmf.h 00:02:26.448 TEST_HEADER include/spdk/opal.h 00:02:26.448 TEST_HEADER include/spdk/pipe.h 00:02:26.448 TEST_HEADER include/spdk/opal_spec.h 00:02:26.448 TEST_HEADER include/spdk/pci_ids.h 00:02:26.448 TEST_HEADER include/spdk/queue.h 00:02:26.448 TEST_HEADER include/spdk/reduce.h 00:02:26.448 TEST_HEADER include/spdk/rpc.h 00:02:26.448 TEST_HEADER include/spdk/scsi.h 00:02:26.448 TEST_HEADER include/spdk/scsi_spec.h 00:02:26.448 TEST_HEADER include/spdk/scheduler.h 00:02:26.448 TEST_HEADER include/spdk/sock.h 00:02:26.448 TEST_HEADER include/spdk/string.h 00:02:26.448 TEST_HEADER include/spdk/stdinc.h 00:02:26.448 TEST_HEADER include/spdk/thread.h 00:02:26.448 TEST_HEADER include/spdk/trace.h 00:02:26.448 TEST_HEADER include/spdk/tree.h 00:02:26.448 TEST_HEADER include/spdk/trace_parser.h 00:02:26.448 TEST_HEADER include/spdk/ublk.h 00:02:26.448 TEST_HEADER include/spdk/util.h 00:02:26.448 TEST_HEADER include/spdk/uuid.h 00:02:26.448 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:26.448 CC app/spdk_tgt/spdk_tgt.o 00:02:26.448 TEST_HEADER include/spdk/version.h 00:02:26.448 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:26.448 TEST_HEADER include/spdk/vmd.h 00:02:26.448 TEST_HEADER 
include/spdk/vhost.h 00:02:26.448 TEST_HEADER include/spdk/xor.h 00:02:26.448 TEST_HEADER include/spdk/zipf.h 00:02:26.448 CXX test/cpp_headers/accel_module.o 00:02:26.448 CXX test/cpp_headers/accel.o 00:02:26.448 CXX test/cpp_headers/barrier.o 00:02:26.448 CXX test/cpp_headers/assert.o 00:02:26.448 CXX test/cpp_headers/base64.o 00:02:26.448 CXX test/cpp_headers/bdev.o 00:02:26.448 CXX test/cpp_headers/bdev_module.o 00:02:26.448 CXX test/cpp_headers/bdev_zone.o 00:02:26.448 CXX test/cpp_headers/bit_array.o 00:02:26.448 CXX test/cpp_headers/bit_pool.o 00:02:26.448 CXX test/cpp_headers/blob_bdev.o 00:02:26.448 CXX test/cpp_headers/blobfs_bdev.o 00:02:26.448 CXX test/cpp_headers/blobfs.o 00:02:26.448 CXX test/cpp_headers/config.o 00:02:26.448 CXX test/cpp_headers/blob.o 00:02:26.448 CXX test/cpp_headers/conf.o 00:02:26.448 CXX test/cpp_headers/crc16.o 00:02:26.448 CXX test/cpp_headers/crc32.o 00:02:26.448 CXX test/cpp_headers/cpuset.o 00:02:26.448 CXX test/cpp_headers/crc64.o 00:02:26.448 CXX test/cpp_headers/dif.o 00:02:26.448 CC examples/nvme/reconnect/reconnect.o 00:02:26.448 CC examples/ioat/verify/verify.o 00:02:26.448 CC app/fio/nvme/fio_plugin.o 00:02:26.448 CC examples/ioat/perf/perf.o 00:02:26.448 CC test/app/histogram_perf/histogram_perf.o 00:02:26.448 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:26.448 CC examples/sock/hello_world/hello_sock.o 00:02:26.448 CC test/app/jsoncat/jsoncat.o 00:02:26.448 CC examples/accel/perf/accel_perf.o 00:02:26.448 CC test/app/stub/stub.o 00:02:26.448 CC test/thread/poller_perf/poller_perf.o 00:02:26.448 CC examples/nvme/arbitration/arbitration.o 00:02:26.448 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:26.448 CC test/event/reactor/reactor.o 00:02:26.736 CC examples/nvme/abort/abort.o 00:02:26.736 CC test/event/reactor_perf/reactor_perf.o 00:02:26.736 CC examples/vmd/led/led.o 00:02:26.736 CC examples/vmd/lsvmd/lsvmd.o 00:02:26.736 CC examples/nvme/hello_world/hello_world.o 00:02:26.736 CC test/nvme/reset/reset.o 
00:02:26.736 CXX test/cpp_headers/dma.o 00:02:26.736 CC examples/util/zipf/zipf.o 00:02:26.736 CC examples/nvme/hotplug/hotplug.o 00:02:26.736 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:26.736 CC test/event/event_perf/event_perf.o 00:02:26.736 CC test/event/app_repeat/app_repeat.o 00:02:26.736 CC examples/bdev/hello_world/hello_bdev.o 00:02:26.736 CC test/env/pci/pci_ut.o 00:02:26.736 CC test/nvme/boot_partition/boot_partition.o 00:02:26.736 CC test/nvme/aer/aer.o 00:02:26.736 CC examples/nvmf/nvmf/nvmf.o 00:02:26.736 CC test/nvme/simple_copy/simple_copy.o 00:02:26.736 CC test/env/memory/memory_ut.o 00:02:26.736 CC test/nvme/overhead/overhead.o 00:02:26.736 CC examples/bdev/bdevperf/bdevperf.o 00:02:26.736 CC test/bdev/bdevio/bdevio.o 00:02:26.736 CC test/nvme/cuse/cuse.o 00:02:26.736 CC test/app/bdev_svc/bdev_svc.o 00:02:26.736 CC test/nvme/err_injection/err_injection.o 00:02:26.736 CC test/nvme/fdp/fdp.o 00:02:26.736 CC examples/thread/thread/thread_ex.o 00:02:26.736 CC examples/idxd/perf/perf.o 00:02:26.736 CC test/env/vtophys/vtophys.o 00:02:26.736 CC test/nvme/connect_stress/connect_stress.o 00:02:26.736 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:26.736 CC test/dma/test_dma/test_dma.o 00:02:26.736 CC test/blobfs/mkfs/mkfs.o 00:02:26.736 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.736 CC test/accel/dif/dif.o 00:02:26.736 CC test/nvme/sgl/sgl.o 00:02:26.736 CC test/nvme/reserve/reserve.o 00:02:26.736 CC examples/blob/hello_world/hello_blob.o 00:02:26.736 CC test/nvme/e2edp/nvme_dp.o 00:02:26.736 CC app/fio/bdev/fio_plugin.o 00:02:26.736 CC test/event/scheduler/scheduler.o 00:02:26.736 CC test/nvme/startup/startup.o 00:02:26.736 CC examples/blob/cli/blobcli.o 00:02:26.736 CC test/nvme/compliance/nvme_compliance.o 00:02:26.736 CC test/nvme/fused_ordering/fused_ordering.o 00:02:26.736 LINK spdk_lspci 00:02:26.736 CC test/lvol/esnap/esnap.o 00:02:26.736 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:26.736 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:26.736 LINK rpc_client_test 00:02:26.736 LINK spdk_nvme_discover 00:02:26.736 LINK nvmf_tgt 00:02:26.736 LINK interrupt_tgt 00:02:27.000 LINK vhost 00:02:27.000 LINK histogram_perf 00:02:27.000 LINK iscsi_tgt 00:02:27.000 LINK led 00:02:27.000 LINK app_repeat 00:02:27.000 LINK event_perf 00:02:27.000 LINK zipf 00:02:27.000 LINK vtophys 00:02:27.000 LINK boot_partition 00:02:27.000 LINK jsoncat 00:02:27.000 LINK env_dpdk_post_init 00:02:27.000 LINK lsvmd 00:02:27.000 LINK poller_perf 00:02:27.000 LINK reactor 00:02:27.000 LINK reactor_perf 00:02:27.000 LINK verify 00:02:27.000 LINK spdk_tgt 00:02:27.000 LINK connect_stress 00:02:27.000 LINK spdk_trace_record 00:02:27.000 CXX test/cpp_headers/endian.o 00:02:27.000 LINK err_injection 00:02:27.000 LINK doorbell_aers 00:02:27.000 CXX test/cpp_headers/env_dpdk.o 00:02:27.000 LINK hello_sock 00:02:27.000 LINK stub 00:02:27.000 CXX test/cpp_headers/env.o 00:02:27.000 LINK cmb_copy 00:02:27.000 CXX test/cpp_headers/event.o 00:02:27.274 CXX test/cpp_headers/fd_group.o 00:02:27.274 LINK bdev_svc 00:02:27.274 LINK hotplug 00:02:27.274 LINK mkfs 00:02:27.274 LINK pmr_persistence 00:02:27.274 LINK simple_copy 00:02:27.274 CXX test/cpp_headers/fd.o 00:02:27.274 LINK scheduler 00:02:27.274 CXX test/cpp_headers/file.o 00:02:27.274 LINK startup 00:02:27.274 LINK hello_world 00:02:27.274 LINK hello_blob 00:02:27.274 CXX test/cpp_headers/ftl.o 00:02:27.274 LINK reset 00:02:27.274 LINK reserve 00:02:27.274 LINK ioat_perf 00:02:27.274 CXX test/cpp_headers/gpt_spec.o 00:02:27.274 CXX test/cpp_headers/hexlify.o 00:02:27.274 LINK hello_bdev 00:02:27.274 CXX test/cpp_headers/histogram_data.o 00:02:27.274 CXX test/cpp_headers/idxd.o 00:02:27.274 CXX test/cpp_headers/idxd_spec.o 00:02:27.274 CXX test/cpp_headers/init.o 00:02:27.274 CXX test/cpp_headers/ioat.o 00:02:27.274 CXX test/cpp_headers/ioat_spec.o 00:02:27.274 CXX test/cpp_headers/iscsi_spec.o 00:02:27.274 LINK fused_ordering 00:02:27.274 
CXX test/cpp_headers/json.o 00:02:27.275 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:27.275 CXX test/cpp_headers/jsonrpc.o 00:02:27.275 LINK aer 00:02:27.275 LINK nvmf 00:02:27.275 LINK sgl 00:02:27.275 LINK spdk_dd 00:02:27.275 CXX test/cpp_headers/likely.o 00:02:27.275 LINK overhead 00:02:27.275 LINK nvme_dp 00:02:27.275 LINK thread 00:02:27.275 CXX test/cpp_headers/log.o 00:02:27.275 CXX test/cpp_headers/lvol.o 00:02:27.275 LINK reconnect 00:02:27.275 LINK nvme_compliance 00:02:27.275 CXX test/cpp_headers/memory.o 00:02:27.275 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:27.275 CXX test/cpp_headers/mmio.o 00:02:27.275 CXX test/cpp_headers/nbd.o 00:02:27.275 LINK arbitration 00:02:27.275 CXX test/cpp_headers/notify.o 00:02:27.275 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:27.275 CXX test/cpp_headers/nvme.o 00:02:27.275 LINK fdp 00:02:27.539 CXX test/cpp_headers/nvme_intel.o 00:02:27.539 LINK bdevio 00:02:27.539 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.539 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.539 LINK test_dma 00:02:27.539 LINK pci_ut 00:02:27.539 LINK abort 00:02:27.539 CXX test/cpp_headers/nvme_spec.o 00:02:27.539 CXX test/cpp_headers/nvme_zns.o 00:02:27.539 LINK dif 00:02:27.539 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.539 LINK idxd_perf 00:02:27.539 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.539 CXX test/cpp_headers/nvmf.o 00:02:27.539 CXX test/cpp_headers/nvmf_spec.o 00:02:27.539 LINK accel_perf 00:02:27.539 CXX test/cpp_headers/nvmf_transport.o 00:02:27.539 CXX test/cpp_headers/opal.o 00:02:27.539 LINK spdk_trace 00:02:27.539 CXX test/cpp_headers/opal_spec.o 00:02:27.539 CXX test/cpp_headers/pci_ids.o 00:02:27.539 CXX test/cpp_headers/pipe.o 00:02:27.539 CXX test/cpp_headers/queue.o 00:02:27.539 CXX test/cpp_headers/reduce.o 00:02:27.539 CXX test/cpp_headers/rpc.o 00:02:27.539 CXX test/cpp_headers/scheduler.o 00:02:27.539 CXX test/cpp_headers/scsi.o 00:02:27.539 CXX test/cpp_headers/scsi_spec.o 00:02:27.540 CXX 
test/cpp_headers/sock.o 00:02:27.540 CXX test/cpp_headers/stdinc.o 00:02:27.540 CXX test/cpp_headers/string.o 00:02:27.540 CXX test/cpp_headers/thread.o 00:02:27.540 CXX test/cpp_headers/trace.o 00:02:27.540 CXX test/cpp_headers/trace_parser.o 00:02:27.540 CXX test/cpp_headers/tree.o 00:02:27.540 CXX test/cpp_headers/ublk.o 00:02:27.540 CXX test/cpp_headers/util.o 00:02:27.540 CXX test/cpp_headers/uuid.o 00:02:27.801 LINK nvme_manage 00:02:27.801 CXX test/cpp_headers/version.o 00:02:27.801 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.801 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.801 CXX test/cpp_headers/vhost.o 00:02:27.801 CXX test/cpp_headers/vmd.o 00:02:27.801 CXX test/cpp_headers/xor.o 00:02:27.801 CXX test/cpp_headers/zipf.o 00:02:27.801 LINK nvme_fuzz 00:02:27.801 LINK blobcli 00:02:27.801 LINK mem_callbacks 00:02:27.801 LINK spdk_nvme_identify 00:02:27.801 LINK spdk_bdev 00:02:27.801 LINK spdk_nvme 00:02:28.062 LINK vhost_fuzz 00:02:28.062 LINK spdk_nvme_perf 00:02:28.062 LINK bdevperf 00:02:28.062 LINK memory_ut 00:02:28.062 LINK spdk_top 00:02:28.323 LINK cuse 00:02:28.893 LINK iscsi_fuzz 00:02:30.804 LINK esnap 00:02:30.804 00:02:30.804 real 0m42.171s 00:02:30.804 user 6m50.158s 00:02:30.804 sys 3m12.143s 00:02:30.804 05:00:27 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:30.804 05:00:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.804 ************************************ 00:02:30.804 END TEST make 00:02:30.804 ************************************ 00:02:30.804 05:00:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:30.804 05:00:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:30.804 05:00:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:31.065 05:00:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:31.065 05:00:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:31.065 05:00:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:31.065 05:00:27 -- 
scripts/common.sh@333 -- # local ver2 ver2_l 00:02:31.065 05:00:27 -- scripts/common.sh@335 -- # IFS=.-: 00:02:31.065 05:00:27 -- scripts/common.sh@335 -- # read -ra ver1 00:02:31.065 05:00:27 -- scripts/common.sh@336 -- # IFS=.-: 00:02:31.065 05:00:27 -- scripts/common.sh@336 -- # read -ra ver2 00:02:31.065 05:00:27 -- scripts/common.sh@337 -- # local 'op=<' 00:02:31.065 05:00:27 -- scripts/common.sh@339 -- # ver1_l=2 00:02:31.065 05:00:27 -- scripts/common.sh@340 -- # ver2_l=1 00:02:31.066 05:00:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:31.066 05:00:27 -- scripts/common.sh@343 -- # case "$op" in 00:02:31.066 05:00:27 -- scripts/common.sh@344 -- # : 1 00:02:31.066 05:00:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:31.066 05:00:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:31.066 05:00:27 -- scripts/common.sh@364 -- # decimal 1 00:02:31.066 05:00:27 -- scripts/common.sh@352 -- # local d=1 00:02:31.066 05:00:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:31.066 05:00:27 -- scripts/common.sh@354 -- # echo 1 00:02:31.066 05:00:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:31.066 05:00:27 -- scripts/common.sh@365 -- # decimal 2 00:02:31.066 05:00:27 -- scripts/common.sh@352 -- # local d=2 00:02:31.066 05:00:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:31.066 05:00:27 -- scripts/common.sh@354 -- # echo 2 00:02:31.066 05:00:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:31.066 05:00:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:31.066 05:00:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:31.066 05:00:27 -- scripts/common.sh@367 -- # return 0 00:02:31.066 05:00:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:31.066 05:00:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:31.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.066 --rc 
genhtml_branch_coverage=1 00:02:31.066 --rc genhtml_function_coverage=1 00:02:31.066 --rc genhtml_legend=1 00:02:31.066 --rc geninfo_all_blocks=1 00:02:31.066 --rc geninfo_unexecuted_blocks=1 00:02:31.066 00:02:31.066 ' 00:02:31.066 05:00:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:31.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.066 --rc genhtml_branch_coverage=1 00:02:31.066 --rc genhtml_function_coverage=1 00:02:31.066 --rc genhtml_legend=1 00:02:31.066 --rc geninfo_all_blocks=1 00:02:31.066 --rc geninfo_unexecuted_blocks=1 00:02:31.066 00:02:31.066 ' 00:02:31.066 05:00:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:31.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.066 --rc genhtml_branch_coverage=1 00:02:31.066 --rc genhtml_function_coverage=1 00:02:31.066 --rc genhtml_legend=1 00:02:31.066 --rc geninfo_all_blocks=1 00:02:31.066 --rc geninfo_unexecuted_blocks=1 00:02:31.066 00:02:31.066 ' 00:02:31.066 05:00:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:31.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.066 --rc genhtml_branch_coverage=1 00:02:31.066 --rc genhtml_function_coverage=1 00:02:31.066 --rc genhtml_legend=1 00:02:31.066 --rc geninfo_all_blocks=1 00:02:31.066 --rc geninfo_unexecuted_blocks=1 00:02:31.066 00:02:31.066 ' 00:02:31.066 05:00:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:02:31.066 05:00:27 -- nvmf/common.sh@7 -- # uname -s 00:02:31.066 05:00:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:31.066 05:00:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:31.066 05:00:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:31.066 05:00:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:31.066 05:00:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:31.066 05:00:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:31.066 05:00:27 -- 
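The trace above shows `scripts/common.sh` gating coverage options on `lcov --version` via a component-wise version comparison (`cmp_versions 1.15 '<' 2`): each version string is split on `.`, `-` and `:` into an array, and fields are compared numerically with missing fields treated as zero. A minimal sketch of that comparison, assuming a hypothetical standalone function name `version_lt` (the real script wires this through `cmp_versions`/`lt` with an `op` argument):

```shell
#!/usr/bin/env bash
# version_lt A B -> exit 0 if version A < version B, else 1.
# Mirrors the IFS=.-: / read -ra splitting seen in the scripts/common.sh trace.
version_lt() {
    local ver1 ver2 ver1_l ver2_l v c1 c2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    # Walk the longer of the two component lists.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # Absent components compare as 0 (so 1.15 vs 2 behaves like 1.15.0 vs 2.0.0).
        c1=${ver1[v]:-0}
        c2=${ver2[v]:-0}
        (( c1 < c2 )) && return 0
        (( c1 > c2 )) && return 1
    done
    return 1  # equal versions are not "less than"
}
```

With this gate, an lcov 1.15 install takes the `--rc lcov_branch_coverage=1 ...` option spelling, since `version_lt 1.15 2` succeeds.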
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:31.066 05:00:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:31.066 05:00:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:31.066 05:00:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:31.066 05:00:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:02:31.066 05:00:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:02:31.066 05:00:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:31.066 05:00:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:31.066 05:00:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:31.066 05:00:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:02:31.066 05:00:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:31.066 05:00:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.066 05:00:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.066 05:00:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.066 05:00:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.066 05:00:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.066 05:00:27 -- paths/export.sh@5 -- # export PATH 00:02:31.066 05:00:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.066 05:00:27 -- nvmf/common.sh@46 -- # : 0 00:02:31.066 05:00:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:31.066 05:00:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:31.066 05:00:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:31.066 05:00:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:31.066 05:00:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:31.066 05:00:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:31.066 05:00:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:31.066 05:00:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:31.066 05:00:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:31.066 05:00:27 -- spdk/autotest.sh@32 -- # uname -s 00:02:31.066 05:00:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:31.066 05:00:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:31.066 05:00:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 00:02:31.066 05:00:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:31.066 05:00:27 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 00:02:31.066 05:00:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:31.066 05:00:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:31.066 05:00:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:31.066 05:00:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:31.066 05:00:27 -- spdk/autotest.sh@48 -- # udevadm_pid=52154 00:02:31.066 05:00:27 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power 00:02:31.066 05:00:27 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power 00:02:31.066 05:00:27 -- spdk/autotest.sh@54 -- # echo 52156 00:02:31.066 05:00:27 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power 00:02:31.066 05:00:27 -- spdk/autotest.sh@56 -- # echo 52157 00:02:31.066 05:00:27 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:31.066 05:00:27 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l 00:02:31.066 05:00:27 -- spdk/autotest.sh@60 -- # echo 52158 00:02:31.066 05:00:27 -- spdk/autotest.sh@62 -- # echo 52159 00:02:31.066 05:00:27 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l 00:02:31.066 05:00:27 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:31.066 05:00:27 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:31.066 05:00:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:31.066 05:00:27 -- common/autotest_common.sh@10 -- # set +x 00:02:31.066 05:00:27 -- spdk/autotest.sh@70 -- # create_test_list 00:02:31.066 05:00:27 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:31.066 05:00:27 -- common/autotest_common.sh@10 -- # set +x 00:02:31.066 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:31.067 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:31.067 05:00:27 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/autotest.sh 00:02:31.067 05:00:27 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:31.067 05:00:27 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:31.067 05:00:27 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output 00:02:31.067 05:00:27 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:31.067 05:00:27 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:31.067 05:00:27 -- common/autotest_common.sh@1450 -- # 
uname 00:02:31.067 05:00:27 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:31.067 05:00:27 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:31.067 05:00:27 -- common/autotest_common.sh@1470 -- # uname 00:02:31.067 05:00:27 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:31.067 05:00:27 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:31.067 05:00:27 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:31.327 lcov: LCOV version 1.15 00:02:31.327 05:00:27 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_base.info 00:02:33.864 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:33.864 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:33.864 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:33.864 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:33.864 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:33.864 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:55.814 05:00:49 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 
00:02:55.814 05:00:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:55.814 05:00:49 -- common/autotest_common.sh@10 -- # set +x 00:02:55.814 05:00:49 -- spdk/autotest.sh@89 -- # rm -f 00:02:55.814 05:00:49 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.814 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:02:55.814 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:55.814 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:55.814 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:56.075 05:00:52 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:02:56.075 05:00:52 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:56.075 05:00:52 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:56.075 05:00:52 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:56.075 05:00:52 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 
00:02:56.075 05:00:52 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:56.075 05:00:52 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:56.075 05:00:52 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:56.075 05:00:52 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:56.075 05:00:52 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:56.075 05:00:52 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:02:56.075 05:00:52 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:02:56.075 05:00:52 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:56.075 05:00:52 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:56.075 05:00:52 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:56.075 05:00:52 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:02:56.075 05:00:52 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:02:56.075 05:00:52 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:56.075 05:00:52 -- common/autotest_common.sh@1660 -- # [[ host-managed != none ]] 00:02:56.075 05:00:52 -- common/autotest_common.sh@1669 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:02:56.075 05:00:52 -- spdk/autotest.sh@96 -- # (( 1 > 0 )) 00:02:56.075 05:00:52 -- spdk/autotest.sh@101 -- # export PCI_BLOCKED=0000:5f:00.0 00:02:56.075 05:00:52 -- spdk/autotest.sh@101 -- # PCI_BLOCKED=0000:5f:00.0 00:02:56.075 05:00:52 -- spdk/autotest.sh@102 -- # export PCI_ZONED=0000:5f:00.0 00:02:56.075 05:00:52 -- spdk/autotest.sh@102 -- # PCI_ZONED=0000:5f:00.0 00:02:56.075 05:00:52 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 00:02:56.075 05:00:52 -- spdk/autotest.sh@108 -- # grep -v p 00:02:56.075 05:00:52 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:56.075 05:00:52 -- 
spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:56.075 05:00:52 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:02:56.075 05:00:52 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:56.075 05:00:52 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:56.075 No valid GPT data, bailing 00:02:56.075 05:00:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:56.075 05:00:52 -- scripts/common.sh@393 -- # pt= 00:02:56.075 05:00:52 -- scripts/common.sh@394 -- # return 1 00:02:56.075 05:00:52 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:56.075 1+0 records in 00:02:56.075 1+0 records out 00:02:56.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534272 s, 196 MB/s 00:02:56.075 05:00:52 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:56.075 05:00:52 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:56.075 05:00:52 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:02:56.075 05:00:52 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:02:56.075 05:00:52 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:56.075 No valid GPT data, bailing 00:02:56.075 05:00:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:56.075 05:00:52 -- scripts/common.sh@393 -- # pt= 00:02:56.075 05:00:52 -- scripts/common.sh@394 -- # return 1 00:02:56.075 05:00:52 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:56.075 1+0 records in 00:02:56.075 1+0 records out 00:02:56.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0014965 s, 701 MB/s 00:02:56.075 05:00:52 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:56.075 05:00:52 -- spdk/autotest.sh@110 -- # [[ -z 0000:5f:00.0 ]] 00:02:56.075 05:00:52 -- spdk/autotest.sh@110 -- # continue 00:02:56.075 05:00:52 -- 
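The `get_zoned_devs` loop traced above skips any NVMe namespace whose block queue advertises a zoning model other than `none` (here `nvme1n2` reports `host-managed` and gets blocked via `PCI_BLOCKED=0000:5f:00.0`). A sketch of the same check, assuming a hypothetical helper name `list_zoned_nvme` and an overridable sysfs root so it can be exercised outside a real host:

```shell
#!/usr/bin/env bash
# Print the names of zoned NVMe block devices, reading the zoning model
# from <sysfs-root>/<dev>/queue/zoned as is_block_zoned does in the trace.
list_zoned_nvme() {
    local sysblock=${1:-/sys/block} nvme zoned
    for nvme in "$sysblock"/nvme*; do
        # Devices without a queue/zoned attribute are treated as non-zoned.
        [[ -e $nvme/queue/zoned ]] || continue
        zoned=$(<"$nvme/queue/zoned")
        # "none" is a conventional namespace; anything else (e.g.
        # "host-managed", "host-aware") is a zoned namespace.
        [[ $zoned != none ]] && echo "${nvme##*/}"
    done
    return 0
}
```

This is why the subsequent `dd if=/dev/zero` wipe runs only against `nvme0n1` and `nvme1n1`: a zoned device rejects arbitrary overwrites, so autotest `continue`s past it.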
spdk/autotest.sh@116 -- # sync 00:02:56.075 05:00:52 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:56.075 05:00:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:56.075 05:00:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:02.667 05:00:58 -- spdk/autotest.sh@122 -- # uname -s 00:03:02.667 05:00:58 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:02.667 05:00:58 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/test-setup.sh 00:03:02.667 05:00:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:02.667 05:00:58 -- common/autotest_common.sh@10 -- # set +x 00:03:02.667 ************************************ 00:03:02.667 START TEST setup.sh 00:03:02.667 ************************************ 00:03:02.667 05:00:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/test-setup.sh 00:03:02.667 * Looking for test storage... 
00:03:02.667 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:03:02.667 05:00:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:02.667 05:00:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:02.667 05:00:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:02.667 05:00:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:02.667 05:00:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:02.667 05:00:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:02.667 05:00:58 -- scripts/common.sh@335 -- # IFS=.-: 00:03:02.667 05:00:58 -- scripts/common.sh@335 -- # read -ra ver1 00:03:02.667 05:00:58 -- scripts/common.sh@336 -- # IFS=.-: 00:03:02.667 05:00:58 -- scripts/common.sh@336 -- # read -ra ver2 00:03:02.667 05:00:58 -- scripts/common.sh@337 -- # local 'op=<' 00:03:02.667 05:00:58 -- scripts/common.sh@339 -- # ver1_l=2 00:03:02.667 05:00:58 -- scripts/common.sh@340 -- # ver2_l=1 00:03:02.667 05:00:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:02.667 05:00:58 -- scripts/common.sh@343 -- # case "$op" in 00:03:02.667 05:00:58 -- scripts/common.sh@344 -- # : 1 00:03:02.667 05:00:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:02.667 05:00:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:02.667 05:00:58 -- scripts/common.sh@364 -- # decimal 1 00:03:02.667 05:00:58 -- scripts/common.sh@352 -- # local d=1 00:03:02.667 05:00:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:02.667 05:00:58 -- scripts/common.sh@354 -- # echo 1 00:03:02.667 05:00:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:02.667 05:00:58 -- scripts/common.sh@365 -- # decimal 2 00:03:02.667 05:00:58 -- scripts/common.sh@352 -- # local d=2 00:03:02.667 05:00:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:02.667 05:00:58 -- scripts/common.sh@354 -- # echo 2 00:03:02.667 05:00:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:02.667 05:00:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:02.667 05:00:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:02.667 05:00:58 -- scripts/common.sh@367 -- # return 0 00:03:02.667 05:00:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:02.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.667 --rc genhtml_branch_coverage=1 00:03:02.667 --rc genhtml_function_coverage=1 00:03:02.667 --rc genhtml_legend=1 00:03:02.667 --rc geninfo_all_blocks=1 00:03:02.667 --rc geninfo_unexecuted_blocks=1 00:03:02.667 00:03:02.667 ' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:02.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.667 --rc genhtml_branch_coverage=1 00:03:02.667 --rc genhtml_function_coverage=1 00:03:02.667 --rc genhtml_legend=1 00:03:02.667 --rc geninfo_all_blocks=1 00:03:02.667 --rc geninfo_unexecuted_blocks=1 00:03:02.667 00:03:02.667 ' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:02.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.667 --rc genhtml_branch_coverage=1 00:03:02.667 --rc 
genhtml_function_coverage=1 00:03:02.667 --rc genhtml_legend=1 00:03:02.667 --rc geninfo_all_blocks=1 00:03:02.667 --rc geninfo_unexecuted_blocks=1 00:03:02.667 00:03:02.667 ' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:02.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.667 --rc genhtml_branch_coverage=1 00:03:02.667 --rc genhtml_function_coverage=1 00:03:02.667 --rc genhtml_legend=1 00:03:02.667 --rc geninfo_all_blocks=1 00:03:02.667 --rc geninfo_unexecuted_blocks=1 00:03:02.667 00:03:02.667 ' 00:03:02.667 05:00:58 -- setup/test-setup.sh@10 -- # uname -s 00:03:02.667 05:00:58 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:02.667 05:00:58 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/acl.sh 00:03:02.667 05:00:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:02.667 05:00:58 -- common/autotest_common.sh@10 -- # set +x 00:03:02.667 ************************************ 00:03:02.667 START TEST acl 00:03:02.667 ************************************ 00:03:02.667 05:00:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/acl.sh 00:03:02.667 * Looking for test storage... 
00:03:02.667 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:03:02.667 05:00:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:02.667 05:00:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:02.667 05:00:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:02.667 05:00:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:02.667 05:00:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:02.667 05:00:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:02.667 05:00:58 -- scripts/common.sh@335 -- # IFS=.-: 00:03:02.667 05:00:58 -- scripts/common.sh@335 -- # read -ra ver1 00:03:02.667 05:00:58 -- scripts/common.sh@336 -- # IFS=.-: 00:03:02.667 05:00:58 -- scripts/common.sh@336 -- # read -ra ver2 00:03:02.667 05:00:58 -- scripts/common.sh@337 -- # local 'op=<' 00:03:02.667 05:00:58 -- scripts/common.sh@339 -- # ver1_l=2 00:03:02.667 05:00:58 -- scripts/common.sh@340 -- # ver2_l=1 00:03:02.667 05:00:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:02.667 05:00:58 -- scripts/common.sh@343 -- # case "$op" in 00:03:02.667 05:00:58 -- scripts/common.sh@344 -- # : 1 00:03:02.667 05:00:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:02.667 05:00:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:02.667 05:00:58 -- scripts/common.sh@364 -- # decimal 1 00:03:02.667 05:00:58 -- scripts/common.sh@352 -- # local d=1 00:03:02.667 05:00:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:02.667 05:00:58 -- scripts/common.sh@354 -- # echo 1 00:03:02.667 05:00:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:02.667 05:00:58 -- scripts/common.sh@365 -- # decimal 2 00:03:02.667 05:00:58 -- scripts/common.sh@352 -- # local d=2 00:03:02.667 05:00:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:02.667 05:00:58 -- scripts/common.sh@354 -- # echo 2 00:03:02.667 05:00:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:02.667 05:00:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:02.667 05:00:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:02.667 05:00:58 -- scripts/common.sh@367 -- # return 0 00:03:02.667 05:00:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:02.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.667 --rc genhtml_branch_coverage=1 00:03:02.667 --rc genhtml_function_coverage=1 00:03:02.667 --rc genhtml_legend=1 00:03:02.667 --rc geninfo_all_blocks=1 00:03:02.667 --rc geninfo_unexecuted_blocks=1 00:03:02.667 00:03:02.667 ' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:02.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.667 --rc genhtml_branch_coverage=1 00:03:02.667 --rc genhtml_function_coverage=1 00:03:02.667 --rc genhtml_legend=1 00:03:02.667 --rc geninfo_all_blocks=1 00:03:02.667 --rc geninfo_unexecuted_blocks=1 00:03:02.667 00:03:02.667 ' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:02.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.667 --rc genhtml_branch_coverage=1 00:03:02.667 --rc 
genhtml_function_coverage=1 00:03:02.667 --rc genhtml_legend=1 00:03:02.667 --rc geninfo_all_blocks=1 00:03:02.667 --rc geninfo_unexecuted_blocks=1 00:03:02.667 00:03:02.667 ' 00:03:02.667 05:00:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:02.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.667 --rc genhtml_branch_coverage=1 00:03:02.667 --rc genhtml_function_coverage=1 00:03:02.667 --rc genhtml_legend=1 00:03:02.667 --rc geninfo_all_blocks=1 00:03:02.667 --rc geninfo_unexecuted_blocks=1 00:03:02.667 00:03:02.667 ' 00:03:02.667 05:00:58 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:02.667 05:00:58 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:02.667 05:00:58 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:02.668 05:00:58 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:02.668 05:00:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:02.668 05:00:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:02.668 05:00:58 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:02.668 05:00:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:02.668 05:00:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:02.668 05:00:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:02.668 05:00:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:02.668 05:00:58 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:02.668 05:00:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:02.668 05:00:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:02.668 05:00:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:02.668 05:00:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:02.668 05:00:58 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:02.668 
05:00:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:02.668 05:00:58 -- common/autotest_common.sh@1660 -- # [[ host-managed != none ]] 00:03:02.668 05:00:58 -- common/autotest_common.sh@1669 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:03:02.668 05:00:58 -- setup/acl.sh@12 -- # devs=() 00:03:02.668 05:00:58 -- setup/acl.sh@12 -- # declare -a devs 00:03:02.668 05:00:58 -- setup/acl.sh@13 -- # drivers=() 00:03:02.668 05:00:58 -- setup/acl.sh@13 -- # declare -A drivers 00:03:02.668 05:00:58 -- setup/acl.sh@51 -- # setup reset 00:03:02.668 05:00:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.668 05:00:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.967 05:01:02 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:05.967 05:01:02 -- setup/acl.sh@16 -- # local dev driver 00:03:05.967 05:01:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.967 05:01:02 -- setup/acl.sh@15 -- # setup output status 00:03:05.967 05:01:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.967 05:01:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status 00:03:08.511 Hugepages 00:03:08.511 node hugesize free / total 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 00:03:08.511 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:08.511 05:01:04 
-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.511 05:01:04 -- 
setup/acl.sh@20 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:04 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.511 05:01:04 -- setup/acl.sh@20 -- # continue 00:03:08.511 05:01:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:08.511 05:01:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:08.511 05:01:05 -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:08.511 05:01:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:08.511 05:01:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:08.511 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.511 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:03:08.511 05:01:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:08.511 05:01:05 -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:08.511 05:01:05 -- setup/acl.sh@21 -- # continue 00:03:08.511 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.512 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # continue 00:03:08.512 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.512 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # continue 00:03:08.512 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.512 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # continue 00:03:08.512 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:08.512 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # continue 00:03:08.512 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.512 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # continue 00:03:08.512 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.512 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # continue 00:03:08.512 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.512 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # continue 00:03:08.512 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.512 05:01:05 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:08.512 05:01:05 -- setup/acl.sh@20 -- # continue 00:03:08.512 05:01:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.512 05:01:05 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:08.512 05:01:05 -- setup/acl.sh@54 -- # run_test denied denied 00:03:08.512 05:01:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.512 05:01:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.512 05:01:05 -- common/autotest_common.sh@10 -- # set +x 00:03:08.512 ************************************ 00:03:08.512 START TEST denied 00:03:08.512 ************************************ 00:03:08.512 05:01:05 -- common/autotest_common.sh@1114 -- # denied 00:03:08.512 05:01:05 -- setup/acl.sh@38 -- # PCI_BLOCKED='0000:5f:00.0 
0000:5e:00.0' 00:03:08.512 05:01:05 -- setup/acl.sh@38 -- # setup output config 00:03:08.512 05:01:05 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:08.512 05:01:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.512 05:01:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:11.806 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:11.806 05:01:08 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:11.806 05:01:08 -- setup/acl.sh@28 -- # local dev driver 00:03:11.806 05:01:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:11.806 05:01:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:11.806 05:01:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:11.806 05:01:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:11.806 05:01:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:11.806 05:01:08 -- setup/acl.sh@41 -- # setup reset 00:03:11.806 05:01:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.806 05:01:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.007 00:03:16.007 real 0m7.608s 00:03:16.007 user 0m2.570s 00:03:16.007 sys 0m4.336s 00:03:16.007 05:01:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:16.007 05:01:12 -- common/autotest_common.sh@10 -- # set +x 00:03:16.007 ************************************ 00:03:16.007 END TEST denied 00:03:16.007 ************************************ 00:03:16.007 05:01:12 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:16.007 05:01:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:16.007 05:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:16.007 05:01:12 -- common/autotest_common.sh@10 -- # set +x 00:03:16.007 ************************************ 00:03:16.007 START TEST allowed 00:03:16.007 
************************************ 00:03:16.007 05:01:12 -- common/autotest_common.sh@1114 -- # allowed 00:03:16.007 05:01:12 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:16.007 05:01:12 -- setup/acl.sh@45 -- # setup output config 00:03:16.007 05:01:12 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:16.007 05:01:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.007 05:01:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:20.211 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:20.211 05:01:16 -- setup/acl.sh@47 -- # verify 00:03:20.211 05:01:16 -- setup/acl.sh@28 -- # local dev driver 00:03:20.211 05:01:16 -- setup/acl.sh@48 -- # setup reset 00:03:20.211 05:01:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.211 05:01:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.412 00:03:24.412 real 0m7.570s 00:03:24.412 user 0m2.448s 00:03:24.412 sys 0m4.311s 00:03:24.412 05:01:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:24.412 05:01:20 -- common/autotest_common.sh@10 -- # set +x 00:03:24.412 ************************************ 00:03:24.412 END TEST allowed 00:03:24.412 ************************************ 00:03:24.412 00:03:24.412 real 0m21.771s 00:03:24.412 user 0m7.576s 00:03:24.412 sys 0m12.921s 00:03:24.412 05:01:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:24.412 05:01:20 -- common/autotest_common.sh@10 -- # set +x 00:03:24.412 ************************************ 00:03:24.412 END TEST acl 00:03:24.412 ************************************ 00:03:24.412 05:01:20 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/hugepages.sh 00:03:24.412 05:01:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.412 05:01:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.412 05:01:20 -- 
common/autotest_common.sh@10 -- # set +x 00:03:24.412 ************************************ 00:03:24.412 START TEST hugepages 00:03:24.412 ************************************ 00:03:24.412 05:01:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/hugepages.sh 00:03:24.412 * Looking for test storage... 00:03:24.412 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:03:24.412 05:01:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:24.412 05:01:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:24.412 05:01:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:24.412 05:01:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:24.412 05:01:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:24.412 05:01:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:24.412 05:01:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:24.412 05:01:20 -- scripts/common.sh@335 -- # IFS=.-: 00:03:24.412 05:01:20 -- scripts/common.sh@335 -- # read -ra ver1 00:03:24.412 05:01:20 -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.412 05:01:20 -- scripts/common.sh@336 -- # read -ra ver2 00:03:24.412 05:01:20 -- scripts/common.sh@337 -- # local 'op=<' 00:03:24.412 05:01:20 -- scripts/common.sh@339 -- # ver1_l=2 00:03:24.412 05:01:20 -- scripts/common.sh@340 -- # ver2_l=1 00:03:24.412 05:01:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:24.412 05:01:20 -- scripts/common.sh@343 -- # case "$op" in 00:03:24.412 05:01:20 -- scripts/common.sh@344 -- # : 1 00:03:24.412 05:01:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:24.412 05:01:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:24.412 05:01:20 -- scripts/common.sh@364 -- # decimal 1 00:03:24.412 05:01:20 -- scripts/common.sh@352 -- # local d=1 00:03:24.412 05:01:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.412 05:01:20 -- scripts/common.sh@354 -- # echo 1 00:03:24.412 05:01:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:24.412 05:01:20 -- scripts/common.sh@365 -- # decimal 2 00:03:24.413 05:01:20 -- scripts/common.sh@352 -- # local d=2 00:03:24.413 05:01:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.413 05:01:20 -- scripts/common.sh@354 -- # echo 2 00:03:24.413 05:01:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:24.413 05:01:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:24.413 05:01:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:24.413 05:01:20 -- scripts/common.sh@367 -- # return 0 00:03:24.413 05:01:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.413 05:01:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:24.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.413 --rc genhtml_branch_coverage=1 00:03:24.413 --rc genhtml_function_coverage=1 00:03:24.413 --rc genhtml_legend=1 00:03:24.413 --rc geninfo_all_blocks=1 00:03:24.413 --rc geninfo_unexecuted_blocks=1 00:03:24.413 00:03:24.413 ' 00:03:24.413 05:01:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:24.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.413 --rc genhtml_branch_coverage=1 00:03:24.413 --rc genhtml_function_coverage=1 00:03:24.413 --rc genhtml_legend=1 00:03:24.413 --rc geninfo_all_blocks=1 00:03:24.413 --rc geninfo_unexecuted_blocks=1 00:03:24.413 00:03:24.413 ' 00:03:24.413 05:01:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:24.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.413 --rc genhtml_branch_coverage=1 00:03:24.413 --rc 
genhtml_function_coverage=1 00:03:24.413 --rc genhtml_legend=1 00:03:24.413 --rc geninfo_all_blocks=1 00:03:24.413 --rc geninfo_unexecuted_blocks=1 00:03:24.413 00:03:24.413 ' 00:03:24.413 05:01:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:24.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.413 --rc genhtml_branch_coverage=1 00:03:24.413 --rc genhtml_function_coverage=1 00:03:24.413 --rc genhtml_legend=1 00:03:24.413 --rc geninfo_all_blocks=1 00:03:24.413 --rc geninfo_unexecuted_blocks=1 00:03:24.413 00:03:24.413 ' 00:03:24.413 05:01:20 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:24.413 05:01:20 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:24.413 05:01:20 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:24.413 05:01:20 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:24.413 05:01:20 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:24.413 05:01:20 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:24.413 05:01:20 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:24.413 05:01:20 -- setup/common.sh@18 -- # local node= 00:03:24.413 05:01:20 -- setup/common.sh@19 -- # local var val 00:03:24.413 05:01:20 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.413 05:01:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.413 05:01:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.413 05:01:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.413 05:01:20 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.413 05:01:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 75862832 kB' 'MemAvailable: 79502124 kB' 'Buffers: 9380 kB' 'Cached: 9595688 kB' 'SwapCached: 0 kB' 'Active: 6436764 kB' 'Inactive: 
3763032 kB' 'Active(anon): 6063740 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598204 kB' 'Mapped: 142992 kB' 'Shmem: 5469012 kB' 'KReclaimable: 198828 kB' 'Slab: 716844 kB' 'SReclaimable: 198828 kB' 'SUnreclaim: 518016 kB' 'KernelStack: 21072 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52947896 kB' 'Committed_AS: 8433296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219308 kB' 'VmallocChunk: 0 kB' 'Percpu: 60672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- 
setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.413 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.413 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 
05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # continue 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.414 05:01:20 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.414 05:01:20 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.414 05:01:20 -- setup/common.sh@33 -- # echo 2048 00:03:24.414 05:01:20 -- setup/common.sh@33 -- # return 0 00:03:24.414 05:01:20 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:24.414 05:01:20 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:24.414 
05:01:20 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:24.414 05:01:20 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:24.414 05:01:20 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:24.414 05:01:20 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:24.414 05:01:20 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:24.414 05:01:20 -- setup/hugepages.sh@207 -- # get_nodes 00:03:24.414 05:01:20 -- setup/hugepages.sh@27 -- # local node 00:03:24.414 05:01:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.414 05:01:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:24.414 05:01:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.414 05:01:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:24.414 05:01:20 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.414 05:01:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.414 05:01:20 -- setup/hugepages.sh@208 -- # clear_hp 00:03:24.414 05:01:20 -- setup/hugepages.sh@37 -- # local node hp 00:03:24.414 05:01:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.414 05:01:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.414 05:01:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:24.414 05:01:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.414 05:01:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:24.414 05:01:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.414 05:01:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.414 05:01:20 -- setup/hugepages.sh@41 -- # echo 0 00:03:24.414 05:01:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.414 05:01:20 -- setup/hugepages.sh@41 -- # 
echo 0 00:03:24.414 05:01:20 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:24.414 05:01:20 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:24.414 05:01:20 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:24.414 05:01:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.414 05:01:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.414 05:01:20 -- common/autotest_common.sh@10 -- # set +x 00:03:24.414 ************************************ 00:03:24.414 START TEST default_setup 00:03:24.414 ************************************ 00:03:24.414 05:01:20 -- common/autotest_common.sh@1114 -- # default_setup 00:03:24.414 05:01:20 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:24.414 05:01:20 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.414 05:01:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:24.415 05:01:20 -- setup/hugepages.sh@51 -- # shift 00:03:24.415 05:01:20 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:24.415 05:01:20 -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.415 05:01:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.415 05:01:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.415 05:01:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:24.415 05:01:20 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:24.415 05:01:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.415 05:01:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.415 05:01:20 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.415 05:01:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.415 05:01:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.415 05:01:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:24.415 05:01:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.415 05:01:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:24.415 05:01:20 -- 
setup/hugepages.sh@73 -- # return 0 00:03:24.415 05:01:20 -- setup/hugepages.sh@137 -- # setup output 00:03:24.415 05:01:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.415 05:01:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:26.958 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:26.958 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:26.958 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:26.958 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:26.958 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:26.958 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:26.958 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:27.219 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:28.164 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:28.164 05:01:24 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:28.164 05:01:24 -- setup/hugepages.sh@89 -- # local node 00:03:28.164 05:01:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.164 05:01:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.164 05:01:24 -- setup/hugepages.sh@92 -- # local surp 00:03:28.164 05:01:24 -- setup/hugepages.sh@93 -- # local resv 00:03:28.164 05:01:24 -- setup/hugepages.sh@94 -- # local anon 00:03:28.164 05:01:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.164 05:01:24 -- setup/hugepages.sh@97 -- # get_meminfo 
AnonHugePages 00:03:28.164 05:01:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.164 05:01:24 -- setup/common.sh@18 -- # local node= 00:03:28.164 05:01:24 -- setup/common.sh@19 -- # local var val 00:03:28.164 05:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.164 05:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.164 05:01:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.164 05:01:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.164 05:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.164 05:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78037224 kB' 'MemAvailable: 81675928 kB' 'Buffers: 9380 kB' 'Cached: 9595804 kB' 'SwapCached: 0 kB' 'Active: 6436448 kB' 'Inactive: 3763032 kB' 'Active(anon): 6063424 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597496 kB' 'Mapped: 142864 kB' 'Shmem: 5469128 kB' 'KReclaimable: 197652 kB' 'Slab: 715956 kB' 'SReclaimable: 197652 kB' 'SUnreclaim: 518304 kB' 'KernelStack: 21088 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8435588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219564 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 
-- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.164 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.164 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 
05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # 
continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.165 05:01:24 -- setup/common.sh@33 -- # echo 0 00:03:28.165 05:01:24 -- setup/common.sh@33 -- # return 0 00:03:28.165 05:01:24 -- setup/hugepages.sh@97 -- # anon=0 00:03:28.165 05:01:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.165 05:01:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.165 05:01:24 -- setup/common.sh@18 -- # local node= 00:03:28.165 05:01:24 -- setup/common.sh@19 -- # local var val 00:03:28.165 05:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.165 05:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.165 05:01:24 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:28.165 05:01:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.165 05:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.165 05:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78037948 kB' 'MemAvailable: 81676652 kB' 'Buffers: 9380 kB' 'Cached: 9595808 kB' 'SwapCached: 0 kB' 'Active: 6435732 kB' 'Inactive: 3763032 kB' 'Active(anon): 6062708 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596800 kB' 'Mapped: 142848 kB' 'Shmem: 5469132 kB' 'KReclaimable: 197652 kB' 'Slab: 715988 kB' 'SReclaimable: 197652 kB' 'SUnreclaim: 518336 kB' 'KernelStack: 21152 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8435600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219564 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 
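The `mem=("${mem[@]#Node +([0-9]) }")` step in the `get_meminfo` trace exists because per-node meminfo files (`/sys/devices/system/node/nodeN/meminfo`) prefix every line with `Node N `, while `/proc/meminfo` does not; stripping the prefix lets both parse identically. A self-contained sketch of just that stripping (sample array is ours):

```shell
# Strip the "Node N " prefix from per-node meminfo lines, as
# setup/common.sh does, so the same key/value parser works for
# global and per-node output. Requires extglob for +([0-9]).
shopt -s extglob
mem=('Node 0 HugePages_Total: 1024' 'Node 0 HugePages_Free: 1024')
mem=("${mem[@]#Node +([0-9]) }")
echo "${mem[0]}"   # HugePages_Total: 1024
```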
05:01:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.165 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.165 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 
-- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # 
continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 
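The `HugePages_Total: 1024` value being scanned for in these meminfo dumps comes from the `get_test_nr_hugepages 2097152 0` call earlier in this trace: the requested size in kB divided by the default hugepage size gives the page count, which is then assigned to the single user-listed node. The arithmetic, with our own variable names:

```shell
# The get_test_nr_hugepages arithmetic seen earlier in the trace
# (variable names ours): a 2097152 kB request with 2048 kB pages
# yields 1024 hugepages for the test.
size_kb=2097152
hugepagesize_kb=2048
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "$nr_hugepages"   # 1024
```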
05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.166 05:01:24 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.166 05:01:24 -- setup/common.sh@33 -- # echo 0 00:03:28.166 05:01:24 -- setup/common.sh@33 -- # return 0 00:03:28.166 05:01:24 -- setup/hugepages.sh@99 -- # surp=0 00:03:28.166 05:01:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.166 05:01:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.166 05:01:24 -- setup/common.sh@18 -- # local node= 00:03:28.166 05:01:24 -- setup/common.sh@19 -- # local var val 00:03:28.166 05:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.166 05:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.166 05:01:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.166 05:01:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.166 05:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.166 05:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.166 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.166 05:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78036872 kB' 'MemAvailable: 81675576 kB' 'Buffers: 9380 kB' 'Cached: 9595820 kB' 'SwapCached: 0 kB' 'Active: 6436312 kB' 'Inactive: 3763032 kB' 'Active(anon): 6063288 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597416 kB' 'Mapped: 142756 kB' 'Shmem: 5469144 kB' 'KReclaimable: 197652 kB' 'Slab: 715972 kB' 'SReclaimable: 197652 kB' 'SUnreclaim: 518320 kB' 'KernelStack: 21312 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 
'Committed_AS: 8435616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219660 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 
05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 
00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.167 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.167 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 
-- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.168 05:01:24 -- setup/common.sh@33 -- # echo 0 00:03:28.168 05:01:24 -- setup/common.sh@33 -- # return 0 00:03:28.168 05:01:24 -- setup/hugepages.sh@100 -- # resv=0 00:03:28.168 05:01:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.168 nr_hugepages=1024 00:03:28.168 05:01:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.168 resv_hugepages=0 00:03:28.168 05:01:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.168 surplus_hugepages=0 00:03:28.168 05:01:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.168 anon_hugepages=0 00:03:28.168 05:01:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.168 05:01:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.168 05:01:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.168 05:01:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.168 05:01:24 -- setup/common.sh@18 -- # local node= 00:03:28.168 05:01:24 -- setup/common.sh@19 -- # local var val 00:03:28.168 05:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.168 05:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.168 05:01:24 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:28.168 05:01:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.168 05:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.168 05:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78041596 kB' 'MemAvailable: 81680300 kB' 'Buffers: 9380 kB' 'Cached: 9595832 kB' 'SwapCached: 0 kB' 'Active: 6435996 kB' 'Inactive: 3763032 kB' 'Active(anon): 6062972 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597068 kB' 'Mapped: 142772 kB' 'Shmem: 5469156 kB' 'KReclaimable: 197652 kB' 'Slab: 715972 kB' 'SReclaimable: 197652 kB' 'SUnreclaim: 518320 kB' 'KernelStack: 21216 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8435416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219612 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 
05:01:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 
-- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.168 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.168 05:01:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.169 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.169 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.169 05:01:24 -- setup/common.sh@33 -- # echo 1024 00:03:28.169 05:01:24 -- setup/common.sh@33 -- # return 0 00:03:28.169 05:01:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.169 05:01:24 -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.169 05:01:24 -- setup/hugepages.sh@27 -- # local node 00:03:28.169 05:01:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.169 05:01:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.169 05:01:24 -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.169 05:01:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:28.169 05:01:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.169 05:01:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.169 05:01:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.169 05:01:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.169 05:01:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.169 05:01:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.169 05:01:24 -- setup/common.sh@18 -- # local node=0 00:03:28.169 05:01:24 -- setup/common.sh@19 -- # local var val 00:03:28.169 05:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.169 05:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.169 05:01:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.169 05:01:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.170 05:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.170 05:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 27917184 kB' 'MemUsed: 4713412 kB' 'SwapCached: 0 kB' 'Active: 1600028 kB' 'Inactive: 176940 kB' 'Active(anon): 1408844 kB' 'Inactive(anon): 0 kB' 'Active(file): 191184 kB' 'Inactive(file): 176940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1624192 kB' 'Mapped: 31636 kB' 'AnonPages: 155900 kB' 'Shmem: 1256068 kB' 'KernelStack: 9672 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98284 kB' 'Slab: 373876 kB' 'SReclaimable: 98284 kB' 'SUnreclaim: 
275592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 
05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 
-- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 
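The trace above shows setup/common.sh scanning a meminfo file key by key (splitting each line on `': '` and `continue`-ing until the requested field, here HugePages_Surp, matches) and then echoing its value. A condensed, hypothetical sketch of that parsing pattern, not the actual SPDK helper, might look like:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: pick one value out of
# /proc/meminfo, or out of a per-node /sys/devices/system/node/nodeN/meminfo.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node is given and a per-node meminfo file exists, read that instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix
    # so both file formats parse the same way.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        # Split "Key:   value kB" on ": " into the key and the number.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}
```

The xtrace in the log is this same scan unrolled: one `[[ var == pattern ]]` test plus `continue` per meminfo key until the match, then `echo` and `return 0`.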
00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # continue 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.170 05:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.170 05:01:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.170 05:01:24 -- setup/common.sh@33 -- # echo 0 00:03:28.171 05:01:24 -- setup/common.sh@33 -- # return 0 00:03:28.171 05:01:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.171 05:01:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.171 05:01:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.171 05:01:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.171 05:01:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:28.171 node0=1024 expecting 1024 00:03:28.171 05:01:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:28.171 00:03:28.171 real 0m4.271s 00:03:28.171 user 0m1.435s 00:03:28.171 sys 0m2.124s 00:03:28.171 05:01:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:28.171 05:01:24 -- common/autotest_common.sh@10 -- # set +x 00:03:28.171 ************************************ 00:03:28.171 END TEST default_setup 00:03:28.171 ************************************ 00:03:28.430 05:01:24 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:28.430 05:01:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.430 05:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.430 05:01:24 -- common/autotest_common.sh@10 -- # set +x 00:03:28.430 ************************************ 00:03:28.430 START TEST per_node_1G_alloc 00:03:28.430 ************************************ 00:03:28.430 05:01:25 -- common/autotest_common.sh@1114 -- # 
per_node_1G_alloc 00:03:28.430 05:01:25 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:28.430 05:01:25 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:28.430 05:01:25 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:28.430 05:01:25 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:28.430 05:01:25 -- setup/hugepages.sh@51 -- # shift 00:03:28.430 05:01:25 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:28.430 05:01:25 -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.430 05:01:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.430 05:01:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:28.430 05:01:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:28.430 05:01:25 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:28.430 05:01:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.430 05:01:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:28.430 05:01:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.430 05:01:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.430 05:01:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.430 05:01:25 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:28.430 05:01:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.430 05:01:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:28.430 05:01:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.430 05:01:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:28.430 05:01:25 -- setup/hugepages.sh@73 -- # return 0 00:03:28.430 05:01:25 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:28.430 05:01:25 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:28.430 05:01:25 -- setup/hugepages.sh@146 -- # setup output 00:03:28.430 05:01:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.430 05:01:25 -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:30.972 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:31.232 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:31.232 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.232 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:31.496 05:01:28 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:31.496 05:01:28 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:31.496 05:01:28 -- setup/hugepages.sh@89 -- # local node 00:03:31.496 05:01:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:31.496 05:01:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:31.496 05:01:28 -- setup/hugepages.sh@92 -- # local surp 00:03:31.496 05:01:28 -- setup/hugepages.sh@93 -- # local resv 00:03:31.496 05:01:28 -- setup/hugepages.sh@94 -- # local anon 00:03:31.496 05:01:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never 
!= *\[\n\e\v\e\r\]* ]] 00:03:31.496 05:01:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:31.496 05:01:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:31.496 05:01:28 -- setup/common.sh@18 -- # local node= 00:03:31.496 05:01:28 -- setup/common.sh@19 -- # local var val 00:03:31.496 05:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.496 05:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.496 05:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.496 05:01:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.496 05:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.496 05:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78044516 kB' 'MemAvailable: 81683220 kB' 'Buffers: 9380 kB' 'Cached: 9595920 kB' 'SwapCached: 0 kB' 'Active: 6437268 kB' 'Inactive: 3763032 kB' 'Active(anon): 6064244 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597712 kB' 'Mapped: 142924 kB' 'Shmem: 5469244 kB' 'KReclaimable: 197652 kB' 'Slab: 716008 kB' 'SReclaimable: 197652 kB' 'SUnreclaim: 518356 kB' 'KernelStack: 21024 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8431840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219548 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # 
continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.496 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.496 05:01:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 
05:01:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.497 05:01:28 -- setup/common.sh@33 -- # echo 0 00:03:31.497 05:01:28 -- setup/common.sh@33 -- # return 0 00:03:31.497 05:01:28 -- setup/hugepages.sh@97 -- # anon=0 00:03:31.497 05:01:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:31.497 05:01:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.497 05:01:28 -- setup/common.sh@18 -- # local node= 00:03:31.497 05:01:28 -- setup/common.sh@19 -- # local var val 00:03:31.497 05:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.497 05:01:28 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.497 05:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.497 05:01:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.497 05:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.497 05:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78046432 kB' 'MemAvailable: 81685136 kB' 'Buffers: 9380 kB' 'Cached: 9595920 kB' 'SwapCached: 0 kB' 'Active: 6436488 kB' 'Inactive: 3763032 kB' 'Active(anon): 6063464 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597440 kB' 'Mapped: 142764 kB' 'Shmem: 5469244 kB' 'KReclaimable: 197652 kB' 'Slab: 715908 kB' 'SReclaimable: 197652 kB' 'SUnreclaim: 518256 kB' 'KernelStack: 21008 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8431852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219516 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 
-- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.497 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.497 05:01:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.498 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.498 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.499 05:01:28 -- setup/common.sh@33 -- # echo 0 00:03:31.499 05:01:28 -- setup/common.sh@33 -- # return 0 00:03:31.499 05:01:28 -- setup/hugepages.sh@99 -- # surp=0 00:03:31.499 05:01:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.499 05:01:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:31.499 05:01:28 -- setup/common.sh@18 -- # local node= 00:03:31.499 05:01:28 -- setup/common.sh@19 -- # local var val 00:03:31.499 05:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.499 05:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.499 05:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.499 05:01:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.499 05:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.499 05:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78046732 kB' 'MemAvailable: 81685436 kB' 'Buffers: 9380 kB' 'Cached: 9595932 kB' 'SwapCached: 0 kB' 'Active: 6436356 kB' 'Inactive: 3763032 kB' 'Active(anon): 6063332 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597256 kB' 'Mapped: 142764 kB' 'Shmem: 5469256 kB' 'KReclaimable: 197652 kB' 'Slab: 715908 kB' 'SReclaimable: 197652 kB' 'SUnreclaim: 518256 kB' 'KernelStack: 
20992 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8431864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219516 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.499 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.499 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.500 05:01:28 -- setup/common.sh@33 -- # echo 0 00:03:31.500 05:01:28 -- setup/common.sh@33 -- # return 0 00:03:31.500 05:01:28 -- setup/hugepages.sh@100 -- # resv=0 00:03:31.500 05:01:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.500 nr_hugepages=1024 00:03:31.500 05:01:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.500 resv_hugepages=0 00:03:31.500 05:01:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.500 surplus_hugepages=0 00:03:31.500 05:01:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.500 anon_hugepages=0 00:03:31.500 05:01:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.500 05:01:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.500 05:01:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.500 05:01:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.500 05:01:28 -- setup/common.sh@18 -- # local node= 00:03:31.500 05:01:28 -- setup/common.sh@19 -- # local var val 00:03:31.500 05:01:28 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:31.500 05:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.500 05:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.500 05:01:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.500 05:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.500 05:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78046672 kB' 'MemAvailable: 81685376 kB' 'Buffers: 9380 kB' 'Cached: 9595948 kB' 'SwapCached: 0 kB' 'Active: 6436520 kB' 'Inactive: 3763032 kB' 'Active(anon): 6063496 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597436 kB' 'Mapped: 142764 kB' 'Shmem: 5469272 kB' 'KReclaimable: 197652 kB' 'Slab: 715908 kB' 'SReclaimable: 197652 kB' 'SUnreclaim: 518256 kB' 'KernelStack: 21008 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8431880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219516 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 
00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.500 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.500 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.501 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.501 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.502 05:01:28 -- setup/common.sh@33 -- # echo 1024 00:03:31.502 05:01:28 -- setup/common.sh@33 -- # return 0 00:03:31.502 05:01:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.502 05:01:28 -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.502 05:01:28 -- setup/hugepages.sh@27 -- # local node 00:03:31.502 05:01:28 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:31.502 05:01:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:31.502 05:01:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.502 05:01:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:31.502 05:01:28 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.502 05:01:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.502 05:01:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.502 05:01:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.502 05:01:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.502 05:01:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.502 05:01:28 -- setup/common.sh@18 -- # local node=0 00:03:31.502 05:01:28 -- setup/common.sh@19 -- # local var val 00:03:31.502 05:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.502 05:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.502 05:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.502 05:01:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.502 05:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.502 05:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 28965828 kB' 'MemUsed: 3664768 kB' 'SwapCached: 0 kB' 'Active: 1600136 kB' 'Inactive: 176940 kB' 'Active(anon): 1408952 kB' 'Inactive(anon): 0 kB' 'Active(file): 191184 kB' 'Inactive(file): 176940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1624232 kB' 'Mapped: 31636 kB' 'AnonPages: 155964 kB' 'Shmem: 1256108 kB' 'KernelStack: 9464 kB' 'PageTables: 3072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 98284 kB' 'Slab: 373816 kB' 'SReclaimable: 98284 kB' 'SUnreclaim: 275532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.502 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.502 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@33 -- # echo 0 00:03:31.503 05:01:28 -- setup/common.sh@33 -- # return 0 00:03:31.503 05:01:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.503 05:01:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.503 05:01:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.503 05:01:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:31.503 05:01:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.503 05:01:28 -- setup/common.sh@18 -- # local node=1 00:03:31.503 05:01:28 -- setup/common.sh@19 -- # local var val 00:03:31.503 05:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.503 05:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.503 05:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:31.503 05:01:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:31.503 05:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.503 05:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682296 kB' 'MemFree: 49081848 kB' 'MemUsed: 11600448 kB' 'SwapCached: 0 kB' 
'Active: 4836292 kB' 'Inactive: 3586092 kB' 'Active(anon): 4654452 kB' 'Inactive(anon): 0 kB' 'Active(file): 181840 kB' 'Inactive(file): 3586092 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981120 kB' 'Mapped: 111128 kB' 'AnonPages: 441324 kB' 'Shmem: 4213188 kB' 'KernelStack: 11528 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99368 kB' 'Slab: 342092 kB' 'SReclaimable: 99368 kB' 'SUnreclaim: 242724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.503 05:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.503 05:01:28 -- setup/common.sh@32 -- # continue 00:03:31.503 
00:03:31.503 05:01:28 -- setup/common.sh@31-32 -- # get_meminfo HugePages_Surp: IFS=': ' read loop skips non-matching fields (Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) 00:03:31.504 05:01:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.504 05:01:28 -- setup/common.sh@33 -- # echo 0 00:03:31.504 05:01:28 -- setup/common.sh@33 -- # return 0 00:03:31.504 05:01:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.504 05:01:28 -- setup/hugepages.sh@126-128 -- # echo 'node0=512 expecting 512' 00:03:31.504 node0=512 expecting 512 00:03:31.504 05:01:28 -- setup/hugepages.sh@126-128 -- # echo 'node1=512 expecting 512' 00:03:31.504 node1=512 expecting 512 00:03:31.504 05:01:28 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:31.504 00:03:31.504 real 0m3.277s 00:03:31.504 user 0m1.355s 00:03:31.504 sys 0m1.988s 00:03:31.504 05:01:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:31.504 05:01:28 -- common/autotest_common.sh@10 -- # set +x 00:03:31.504 ************************************ 00:03:31.504 END TEST per_node_1G_alloc 00:03:31.504 ************************************ 00:03:31.504 05:01:28 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:31.504 05:01:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:31.504 05:01:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:31.504 05:01:28 -- common/autotest_common.sh@10 -- # set +x 00:03:31.504 ************************************ 00:03:31.504 START TEST even_2G_alloc 00:03:31.504 ************************************ 00:03:31.764 05:01:28 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152: size=2097152 >= default_hugepages, so nr_hugepages=1024 00:03:31.764 05:01:28 -- setup/hugepages.sh@58-84 -- # get_test_nr_hugepages_per_node: user_nodes=(), _nr_hugepages=1024, _no_nodes=2; loop assigns nodes_test[1]=512 then nodes_test[0]=512 00:03:31.764 05:01:28 -- setup/hugepages.sh@153 -- # NRHUGE=1024 HUGE_EVEN_ALLOC=yes setup output 00:03:31.764 05:01:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.764 05:01:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:34.305 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:34.565 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.565 0000:00:04.0-0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.565 0000:80:04.3-0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.565 
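The get_test_nr_hugepages_per_node trace above walks `_no_nodes` down to 0 and gives each NUMA node an equal share of `nr_hugepages`. A minimal bash sketch of that even split, assuming the values shown in this run (1024 x 2 MiB pages over 2 nodes); the variable names mirror the trace but the loop body is an illustrative reconstruction, not the exact hugepages.sh source:

```shell
#!/usr/bin/env bash
# Sketch of the even per-node split shown at hugepages.sh@81-84:
# 1024 hugepages divided across 2 NUMA nodes -> 512 per node.
nr_hugepages=1024
no_nodes=2
nodes_test=()

_no_nodes=$no_nodes
while (( _no_nodes > 0 )); do
  # each node gets an equal share (the trace echoes ': 512' per node)
  nodes_test[_no_nodes - 1]=$(( nr_hugepages / no_nodes ))
  _no_nodes=$(( _no_nodes - 1 ))
done

echo "node0=${nodes_test[0]} expecting 512"
echo "node1=${nodes_test[1]} expecting 512"
```

With HUGE_EVEN_ALLOC=yes, setup.sh is then expected to realize exactly this per-node layout, which verify_nr_hugepages checks afterwards.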
0000:80:04.0-0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.833 05:01:31 -- setup/hugepages.sh@154 -- # verify_nr_hugepages (local node sorted_t sorted_s surp resv anon) 00:03:34.833 05:01:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.833 05:01:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.833 05:01:31 -- setup/common.sh@17-31 -- # mem_f=/proc/meminfo; mapfile -t mem; IFS=': '; read -r var val _ 00:03:34.833 05:01:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78077456 kB' 'MemAvailable: 81716096 kB' 'Buffers: 9380 kB' 'Cached: 9596036 kB' 'SwapCached: 0 kB' 'Active: 6436856 kB' 'Inactive: 3763032 kB' 'Active(anon): 6063832 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597804 kB' 'Mapped: 141804 kB' 'Shmem: 5469360 kB' 'KReclaimable: 197524 kB' 'Slab: 715800 kB' 'SReclaimable: 197524 kB' 'SUnreclaim: 518276 kB' 'KernelStack: 20976 kB' 'PageTables: 7352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8425244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219516 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:34.833 05:01:31 -- setup/common.sh@32 -- # read loop skips non-matching fields until AnonHugePages matches 00:03:34.834 05:01:31 -- setup/common.sh@33 -- # echo 0 00:03:34.834 05:01:31 -- setup/common.sh@33 -- # return 0 00:03:34.834 05:01:31 -- setup/hugepages.sh@97 -- # anon=0 00:03:34.834 05:01:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.834 05:01:31 -- setup/common.sh@16 -- # printf '%s\n' of a second /proc/meminfo snapshot, identical to the first except: 'MemFree: 78076952 kB' 'MemAvailable: 81715560 kB' 'Active: 6436516 kB' 'Active(anon): 6063492 kB' 'AnonPages: 597488 kB' 'Mapped: 141744 kB' 'KReclaimable: 197460 kB' 'Slab: 715736 kB' 'SReclaimable: 197460 kB' 'KernelStack: 20960 kB' 'PageTables: 7292 kB' 'Committed_AS: 8425256 kB' 'VmallocUsed: 219532 kB' 00:03:34.834 05:01:31 -- setup/common.sh@31-32 -- # read loop scans fields for HugePages_Surp (MemTotal through Dirty: no match) 00:03:34.834 05:01:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.835 
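The repeated `IFS=': '` / `read -r var val _` / `[[ field == \H\u\g\e... ]]` statements throughout this log are a single loop in setup/common.sh, expanded by xtrace once per /proc/meminfo field. A hedged sketch of that lookup pattern; the function name matches the trace, but the optional file argument is an illustrative addition for testability (the real helper reads /proc/meminfo or a per-node meminfo):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern the xtrace shows: split each
# "Key:   value kB" line on ': ', skip fields until the requested key
# matches, then print its value; print 0 when the key is absent.
get_meminfo() {
  local get=$1 file=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"   # e.g. 0 for HugePages_Surp in the run above
      return 0
    fi
  done < "$file"
  echo 0            # key not present: report 0, as the trace's '@33 echo 0' does
}
```

On the snapshot shown here, `get_meminfo HugePages_Surp` yields 0 and `get_meminfo HugePages_Total` yields 1024, which is what verify_nr_hugepages compares against the expected per-node counts.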
05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.835 05:01:31 -- setup/common.sh@31-32 -- # read loop skips AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages (no match yet) 00:03:34.836 05:01:31 -- setup/common.sh@31
-- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.836 05:01:31 -- setup/common.sh@33 -- # echo 0 00:03:34.836 05:01:31 -- setup/common.sh@33 -- # return 0 00:03:34.836 05:01:31 -- setup/hugepages.sh@99 -- # surp=0 00:03:34.836 05:01:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.836 05:01:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.836 05:01:31 -- setup/common.sh@18 -- # local node= 00:03:34.836 05:01:31 -- setup/common.sh@19 -- # local var val 00:03:34.836 05:01:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.836 05:01:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.836 05:01:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.836 05:01:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.836 05:01:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.836 05:01:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78077448 kB' 'MemAvailable: 81716056 kB' 'Buffers: 9380 kB' 'Cached: 9596036 kB' 'SwapCached: 0 kB' 'Active: 6436488 kB' 'Inactive: 3763032 kB' 'Active(anon): 6063464 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597460 kB' 'Mapped: 141744 kB' 'Shmem: 5469360 kB' 'KReclaimable: 197460 kB' 'Slab: 715792 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518332 kB' 'KernelStack: 20960 kB' 'PageTables: 7304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8425272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219532 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.836 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.836 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- 
setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 
05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 
-- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.837 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.837 05:01:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 
05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.838 05:01:31 -- setup/common.sh@33 -- # echo 0 00:03:34.838 05:01:31 -- setup/common.sh@33 -- # return 0 00:03:34.838 05:01:31 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.838 05:01:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.838 nr_hugepages=1024 00:03:34.838 05:01:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.838 resv_hugepages=0 00:03:34.838 05:01:31 -- setup/hugepages.sh@104 -- # echo 
surplus_hugepages=0 00:03:34.838 surplus_hugepages=0 00:03:34.838 05:01:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.838 anon_hugepages=0 00:03:34.838 05:01:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.838 05:01:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.838 05:01:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.838 05:01:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.838 05:01:31 -- setup/common.sh@18 -- # local node= 00:03:34.838 05:01:31 -- setup/common.sh@19 -- # local var val 00:03:34.838 05:01:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.838 05:01:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.838 05:01:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.838 05:01:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.838 05:01:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.838 05:01:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78077552 kB' 'MemAvailable: 81716160 kB' 'Buffers: 9380 kB' 'Cached: 9596036 kB' 'SwapCached: 0 kB' 'Active: 6436628 kB' 'Inactive: 3763032 kB' 'Active(anon): 6063604 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597600 kB' 'Mapped: 141744 kB' 'Shmem: 5469360 kB' 'KReclaimable: 197460 kB' 'Slab: 715788 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518328 kB' 'KernelStack: 20944 kB' 'PageTables: 7256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 
8425284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219532 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.838 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.838 05:01:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- 
setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- 
setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.839 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.839 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- 
setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- 
setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.840 05:01:31 -- setup/common.sh@33 -- # echo 1024 00:03:34.840 05:01:31 -- setup/common.sh@33 -- # return 0 00:03:34.840 05:01:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.840 05:01:31 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.840 05:01:31 -- setup/hugepages.sh@27 -- # local node 00:03:34.840 05:01:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.840 05:01:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.840 05:01:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.840 05:01:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.840 05:01:31 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.840 05:01:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.840 05:01:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.840 05:01:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.840 05:01:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.840 05:01:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.840 05:01:31 -- setup/common.sh@18 -- # local node=0 00:03:34.840 05:01:31 -- setup/common.sh@19 -- # local var val 00:03:34.840 05:01:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.840 05:01:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.840 05:01:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.840 05:01:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.840 05:01:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.840 05:01:31 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:34.840 05:01:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 28979388 kB' 'MemUsed: 3651208 kB' 'SwapCached: 0 kB' 'Active: 1604732 kB' 'Inactive: 176940 kB' 'Active(anon): 1413548 kB' 'Inactive(anon): 0 kB' 'Active(file): 191184 kB' 'Inactive(file): 176940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1624300 kB' 'Mapped: 30840 kB' 'AnonPages: 160548 kB' 'Shmem: 1256176 kB' 'KernelStack: 9448 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98124 kB' 'Slab: 373680 kB' 'SReclaimable: 98124 kB' 'SUnreclaim: 275556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- 
setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # 
continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.840 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.840 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 
-- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 
05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@33 -- # echo 0 00:03:34.841 05:01:31 -- setup/common.sh@33 -- # return 0 00:03:34.841 05:01:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.841 05:01:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.841 05:01:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.841 05:01:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:34.841 05:01:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.841 05:01:31 -- setup/common.sh@18 -- # local node=1 00:03:34.841 05:01:31 -- setup/common.sh@19 -- # local var val 00:03:34.841 05:01:31 -- setup/common.sh@20 -- # local mem_f mem 
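When `get_meminfo` is given a node number, the trace shows it swapping `/proc/meminfo` for `/sys/devices/system/node/nodeN/meminfo` and then running `mem=("${mem[@]#Node +([0-9]) }")`, because each per-node line carries a `Node N ` prefix that would break the key/value split. A hedged sketch of just that prefix strip (the helper name is mine; the expansion pattern is the one visible in the log, which needs `extglob`):

```shell
#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern below

# Sketch of the per-node branch in setup/common.sh seen above: lines from
# /sys/devices/system/node/nodeN/meminfo carry a "Node N " prefix that must
# be stripped before the IFS=': ' key/value parse.
strip_node_prefix() {
    local -a mem=("$@")
    # "Node 0 MemTotal: 100 kB" -> "MemTotal: 100 kB"; other lines pass through
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

strip_node_prefix 'Node 1 HugePages_Free: 512'   # prints HugePages_Free: 512
```

After the strip, the same linear key-match loop runs again, which is why the `HugePages_Surp` comparisons repeat once per node in the log.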
00:03:34.841 05:01:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.841 05:01:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:34.841 05:01:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:34.841 05:01:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.841 05:01:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682296 kB' 'MemFree: 49097408 kB' 'MemUsed: 11584888 kB' 'SwapCached: 0 kB' 'Active: 4831804 kB' 'Inactive: 3586092 kB' 'Active(anon): 4649964 kB' 'Inactive(anon): 0 kB' 'Active(file): 181840 kB' 'Inactive(file): 3586092 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981160 kB' 'Mapped: 110904 kB' 'AnonPages: 436912 kB' 'Shmem: 4213228 kB' 'KernelStack: 11512 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99336 kB' 'Slab: 342124 kB' 'SReclaimable: 99336 kB' 'SUnreclaim: 242788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.841 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.841 05:01:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- 
setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # 
continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # continue 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.842 05:01:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.842 05:01:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.842 05:01:31 -- setup/common.sh@33 -- # echo 0 00:03:34.842 05:01:31 -- setup/common.sh@33 -- # return 0 00:03:34.842 05:01:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.842 05:01:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.842 05:01:31 -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:03:34.842 05:01:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.842 05:01:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.842 node0=512 expecting 512 00:03:34.842 05:01:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.843 05:01:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.843 05:01:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.843 05:01:31 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:34.843 node1=512 expecting 512 00:03:34.843 05:01:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:34.843 00:03:34.843 real 0m3.271s 00:03:34.843 user 0m1.347s 00:03:34.843 sys 0m1.990s 00:03:34.843 05:01:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:34.843 05:01:31 -- common/autotest_common.sh@10 -- # set +x 00:03:34.843 ************************************ 00:03:34.843 END TEST even_2G_alloc 00:03:34.843 ************************************ 00:03:34.843 05:01:31 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:34.843 05:01:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.843 05:01:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.843 05:01:31 -- common/autotest_common.sh@10 -- # set +x 00:03:34.843 ************************************ 00:03:34.843 START TEST odd_alloc 00:03:34.843 ************************************ 00:03:34.843 05:01:31 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:34.843 05:01:31 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:34.843 05:01:31 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:34.843 05:01:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.843 05:01:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.843 05:01:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:34.843 05:01:31 -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:03:34.843 05:01:31 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.843 05:01:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.843 05:01:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:34.843 05:01:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.843 05:01:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.843 05:01:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.843 05:01:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.843 05:01:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.843 05:01:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.843 05:01:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:34.843 05:01:31 -- setup/hugepages.sh@83 -- # : 513 00:03:34.843 05:01:31 -- setup/hugepages.sh@84 -- # : 1 00:03:34.843 05:01:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.843 05:01:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:34.843 05:01:31 -- setup/hugepages.sh@83 -- # : 0 00:03:34.843 05:01:31 -- setup/hugepages.sh@84 -- # : 0 00:03:34.843 05:01:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.843 05:01:31 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:34.843 05:01:31 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:34.843 05:01:31 -- setup/hugepages.sh@160 -- # setup output 00:03:34.843 05:01:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.843 05:01:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:38.148 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:38.148 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:38.148 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:00:04.4 (8086 2021): Already 
using the vfio-pci driver 00:03:38.148 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.148 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.148 05:01:34 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:38.148 05:01:34 -- setup/hugepages.sh@89 -- # local node 00:03:38.148 05:01:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.148 05:01:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.148 05:01:34 -- setup/hugepages.sh@92 -- # local surp 00:03:38.148 05:01:34 -- setup/hugepages.sh@93 -- # local resv 00:03:38.148 05:01:34 -- setup/hugepages.sh@94 -- # local anon 00:03:38.148 05:01:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.148 05:01:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.148 05:01:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.148 05:01:34 -- setup/common.sh@18 -- # local node= 00:03:38.148 05:01:34 -- setup/common.sh@19 -- # local var val 00:03:38.148 05:01:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.148 05:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.148 05:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.148 05:01:34 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.148 05:01:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.148 05:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78051656 kB' 'MemAvailable: 81690264 kB' 'Buffers: 9380 kB' 'Cached: 9596164 kB' 'SwapCached: 0 kB' 'Active: 6438320 kB' 'Inactive: 3763032 kB' 'Active(anon): 6065296 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598976 kB' 'Mapped: 141784 kB' 'Shmem: 5469488 kB' 'KReclaimable: 197460 kB' 'Slab: 715896 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518436 kB' 'KernelStack: 20976 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53995448 kB' 'Committed_AS: 8425532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219660 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- 
setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.148 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.148 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 
00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 
-- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 
05:01:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.149 05:01:34 -- setup/common.sh@33 -- # echo 0 00:03:38.149 05:01:34 -- setup/common.sh@33 -- # return 0 00:03:38.149 05:01:34 -- setup/hugepages.sh@97 -- # anon=0 00:03:38.149 05:01:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.149 05:01:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.149 05:01:34 -- setup/common.sh@18 -- # local node= 00:03:38.149 05:01:34 -- setup/common.sh@19 -- # local var val 00:03:38.149 05:01:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.149 05:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.149 05:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.149 05:01:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.149 05:01:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.149 05:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78052996 kB' 'MemAvailable: 81691604 kB' 'Buffers: 9380 kB' 'Cached: 9596168 kB' 'SwapCached: 0 kB' 'Active: 6437232 kB' 'Inactive: 3763032 kB' 'Active(anon): 6064208 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597992 kB' 'Mapped: 141752 kB' 'Shmem: 5469492 kB' 'KReclaimable: 197460 kB' 'Slab: 716176 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518716 kB' 'KernelStack: 20960 kB' 'PageTables: 7304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53995448 kB' 'Committed_AS: 8425776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219532 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 
05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.149 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.149 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ 
Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.150 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.150 05:01:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.151 05:01:34 -- setup/common.sh@33 -- # echo 0 00:03:38.151 05:01:34 -- setup/common.sh@33 -- # return 0 00:03:38.151 05:01:34 -- setup/hugepages.sh@99 -- # surp=0 00:03:38.151 05:01:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.151 
05:01:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.151 05:01:34 -- setup/common.sh@18 -- # local node= 00:03:38.151 05:01:34 -- setup/common.sh@19 -- # local var val 00:03:38.151 05:01:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.151 05:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.151 05:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.151 05:01:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.151 05:01:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.151 05:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.151 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.151 05:01:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78053624 kB' 'MemAvailable: 81692232 kB' 'Buffers: 9380 kB' 'Cached: 9596180 kB' 'SwapCached: 0 kB' 'Active: 6437240 kB' 'Inactive: 3763032 kB' 'Active(anon): 6064216 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597988 kB' 'Mapped: 141752 kB' 'Shmem: 5469504 kB' 'KReclaimable: 197460 kB' 'Slab: 716176 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518716 kB' 'KernelStack: 20960 kB' 'PageTables: 7304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53995448 kB' 'Committed_AS: 8425792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219532 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 
'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.151 05:01:34 -- setup/common.sh@32 -- # continue [... identical compare/continue trace repeats for every /proc/meminfo key from MemFree through HugePages_Free ...] 00:03:38.152 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.152 05:01:34 -- setup/common.sh@33 -- # echo 0 00:03:38.152 05:01:34 -- setup/common.sh@33 -- # return 0 00:03:38.152 05:01:34 -- setup/hugepages.sh@100 -- # resv=0 00:03:38.152 05:01:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:38.152 nr_hugepages=1025 00:03:38.152 05:01:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.152 resv_hugepages=0 00:03:38.152 05:01:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.152 surplus_hugepages=0 00:03:38.152 05:01:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.152 anon_hugepages=0 00:03:38.152 05:01:34 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:38.152 05:01:34 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:38.152 05:01:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.152 05:01:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.152 05:01:34 -- setup/common.sh@18 -- # local node= 00:03:38.152 05:01:34 -- setup/common.sh@19 -- # local var val 00:03:38.152 05:01:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.152 05:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.152 05:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.152 05:01:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.152 05:01:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.152 05:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.152 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.152 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.152 05:01:34 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 93312892 kB' 'MemFree: 78053624 kB' 'MemAvailable: 81692232 kB' 'Buffers: 9380 kB' 'Cached: 9596204 kB' 'SwapCached: 0 kB' 'Active: 6436920 kB' 'Inactive: 3763032 kB' 'Active(anon): 6063896 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597596 kB' 'Mapped: 141752 kB' 'Shmem: 5469528 kB' 'KReclaimable: 197460 kB' 'Slab: 716176 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518716 kB' 'KernelStack: 20944 kB' 'PageTables: 7256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53995448 kB' 'Committed_AS: 8425804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219532 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:38.152 05:01:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.152 05:01:34 -- setup/common.sh@32 -- # continue [... identical compare/continue trace repeats for every /proc/meminfo key from MemFree through Unaccepted ...] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.154 05:01:34 -- setup/common.sh@33 -- # echo 1025 00:03:38.154 05:01:34 -- setup/common.sh@33 -- # return 0 00:03:38.154 05:01:34 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:38.154 05:01:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.154 05:01:34 -- setup/hugepages.sh@27 -- # local node 00:03:38.154 05:01:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.154 05:01:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.154 05:01:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.154 05:01:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:38.154 05:01:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.154 05:01:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.154 05:01:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.154 05:01:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:03:38.154 05:01:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.154 05:01:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.154 05:01:34 -- setup/common.sh@18 -- # local node=0 00:03:38.154 05:01:34 -- setup/common.sh@19 -- # local var val 00:03:38.154 05:01:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.154 05:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.154 05:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.154 05:01:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.154 05:01:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.154 05:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 28965728 kB' 'MemUsed: 3664868 kB' 'SwapCached: 0 kB' 'Active: 1605792 kB' 'Inactive: 176940 kB' 'Active(anon): 1414608 kB' 'Inactive(anon): 0 kB' 'Active(file): 191184 kB' 'Inactive(file): 176940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1624376 kB' 'Mapped: 30840 kB' 'AnonPages: 161460 kB' 'Shmem: 1256252 kB' 'KernelStack: 9448 kB' 'PageTables: 3024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98124 kB' 'Slab: 373912 kB' 'SReclaimable: 98124 kB' 'SUnreclaim: 275788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.154 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.154 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@33 -- # echo 0 00:03:38.155 05:01:34 -- setup/common.sh@33 -- # return 0 00:03:38.155 05:01:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.155 05:01:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.155 05:01:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.155 05:01:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:38.155 05:01:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.155 05:01:34 -- setup/common.sh@18 -- # local node=1 00:03:38.155 05:01:34 -- setup/common.sh@19 -- # local var val 00:03:38.155 05:01:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.155 05:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.155 05:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:38.155 05:01:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:38.155 05:01:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.155 05:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682296 kB' 'MemFree: 49087392 kB' 'MemUsed: 11594904 kB' 'SwapCached: 0 kB' 'Active: 4832040 kB' 'Inactive: 3586092 kB' 'Active(anon): 4650200 kB' 'Inactive(anon): 0 kB' 'Active(file): 181840 kB' 'Inactive(file): 3586092 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981224 kB' 'Mapped: 110912 kB' 'AnonPages: 437084 kB' 'Shmem: 4213292 kB' 'KernelStack: 11512 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99336 kB' 'Slab: 342264 kB' 'SReclaimable: 99336 kB' 'SUnreclaim: 242928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 
05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 
-- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.155 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # continue 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 05:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 05:01:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.156 05:01:34 -- setup/common.sh@33 -- # echo 0 00:03:38.156 05:01:34 -- setup/common.sh@33 -- # return 0 00:03:38.156 05:01:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.156 05:01:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.156 05:01:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.156 05:01:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.156 05:01:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:38.156 node0=512 expecting 513 00:03:38.156 05:01:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.156 05:01:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.156 05:01:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.156 05:01:34 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:38.156 node1=513 expecting 512 00:03:38.156 05:01:34 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:38.156 00:03:38.156 real 0m3.309s 00:03:38.156 user 0m1.379s 00:03:38.156 sys 0m1.999s 00:03:38.156 05:01:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:38.156 05:01:34 -- common/autotest_common.sh@10 -- # set +x 00:03:38.156 ************************************ 00:03:38.156 END TEST odd_alloc 00:03:38.156 ************************************ 00:03:38.417 05:01:34 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:38.417 05:01:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:38.417 05:01:34 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:03:38.417 05:01:34 -- common/autotest_common.sh@10 -- # set +x 00:03:38.417 ************************************ 00:03:38.417 START TEST custom_alloc 00:03:38.417 ************************************ 00:03:38.417 05:01:34 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:38.417 05:01:34 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:38.417 05:01:34 -- setup/hugepages.sh@169 -- # local node 00:03:38.417 05:01:34 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:38.417 05:01:34 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:38.417 05:01:34 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:38.417 05:01:34 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:38.417 05:01:34 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:38.417 05:01:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:38.417 05:01:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:38.417 05:01:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.417 05:01:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.417 05:01:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:38.417 05:01:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.417 05:01:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.417 05:01:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.417 05:01:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:38.417 05:01:34 -- setup/hugepages.sh@83 -- # : 256 00:03:38.417 05:01:34 -- setup/hugepages.sh@84 -- # : 1 00:03:38.417 05:01:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.417 
05:01:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:38.417 05:01:34 -- setup/hugepages.sh@83 -- # : 0 00:03:38.417 05:01:34 -- setup/hugepages.sh@84 -- # : 0 00:03:38.417 05:01:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:38.417 05:01:34 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:38.417 05:01:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:38.417 05:01:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:38.417 05:01:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:38.417 05:01:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.417 05:01:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.417 05:01:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.417 05:01:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.417 05:01:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.417 05:01:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.417 05:01:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:38.417 05:01:34 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:38.417 05:01:34 -- setup/hugepages.sh@78 -- # return 0 00:03:38.417 05:01:34 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:38.417 05:01:34 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:38.417 05:01:34 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:38.417 05:01:34 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:38.417 05:01:34 -- 
setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:38.417 05:01:34 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:38.417 05:01:34 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:38.417 05:01:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.417 05:01:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.417 05:01:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.417 05:01:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.417 05:01:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.417 05:01:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.417 05:01:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:38.417 05:01:34 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:38.417 05:01:34 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:38.417 05:01:34 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:38.417 05:01:34 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:38.417 05:01:34 -- setup/hugepages.sh@78 -- # return 0 00:03:38.417 05:01:34 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:38.417 05:01:34 -- setup/hugepages.sh@187 -- # setup output 00:03:38.417 05:01:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.417 05:01:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:40.957 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:41.217 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:41.217 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
00:03:41.217 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.217 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.480 05:01:38 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:41.480 05:01:38 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:41.480 05:01:38 -- setup/hugepages.sh@89 -- # local node 00:03:41.480 05:01:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.480 05:01:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.480 05:01:38 -- setup/hugepages.sh@92 -- # local surp 00:03:41.480 05:01:38 -- setup/hugepages.sh@93 -- # local resv 00:03:41.480 05:01:38 -- setup/hugepages.sh@94 -- # local anon 00:03:41.481 05:01:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.481 05:01:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.481 05:01:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.481 05:01:38 -- setup/common.sh@18 -- # local node= 00:03:41.481 05:01:38 -- setup/common.sh@19 -- # local var val 00:03:41.481 05:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.481 05:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:03:41.481 05:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.481 05:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.481 05:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.481 05:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 76999432 kB' 'MemAvailable: 80638040 kB' 'Buffers: 9380 kB' 'Cached: 9596288 kB' 'SwapCached: 0 kB' 'Active: 6438884 kB' 'Inactive: 3763032 kB' 'Active(anon): 6065860 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599408 kB' 'Mapped: 141864 kB' 'Shmem: 5469612 kB' 'KReclaimable: 197460 kB' 'Slab: 716100 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518640 kB' 'KernelStack: 20976 kB' 'PageTables: 7372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53472184 kB' 'Committed_AS: 8426536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219548 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 
05:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # 
[[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- 
setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 
00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.481 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.481 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.482 05:01:38 -- setup/common.sh@33 -- # echo 0 00:03:41.482 05:01:38 -- setup/common.sh@33 -- # return 0 00:03:41.482 05:01:38 -- setup/hugepages.sh@97 -- # anon=0 00:03:41.482 05:01:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.482 05:01:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.482 05:01:38 -- setup/common.sh@18 -- # local node= 00:03:41.482 05:01:38 -- setup/common.sh@19 -- # local var val 00:03:41.482 05:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.482 05:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.482 05:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.482 05:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.482 05:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.482 05:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 
05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 76999868 kB' 'MemAvailable: 80638476 kB' 'Buffers: 9380 kB' 'Cached: 9596292 kB' 'SwapCached: 0 kB' 'Active: 6438564 kB' 'Inactive: 3763032 kB' 'Active(anon): 6065540 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599116 kB' 'Mapped: 141756 kB' 'Shmem: 5469616 kB' 'KReclaimable: 197460 kB' 'Slab: 716076 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518616 kB' 'KernelStack: 20960 kB' 'PageTables: 7304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53472184 kB' 'Committed_AS: 8426548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219532 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # 
continue 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.482 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.482 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 
-- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.483 05:01:38 -- setup/common.sh@33 -- # echo 0 00:03:41.483 05:01:38 -- setup/common.sh@33 -- # return 0 00:03:41.483 05:01:38 
-- setup/hugepages.sh@99 -- # surp=0 00:03:41.483 05:01:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.483 05:01:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.483 05:01:38 -- setup/common.sh@18 -- # local node= 00:03:41.483 05:01:38 -- setup/common.sh@19 -- # local var val 00:03:41.483 05:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.483 05:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.483 05:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.483 05:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.483 05:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.483 05:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.483 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.483 05:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 77000476 kB' 'MemAvailable: 80639084 kB' 'Buffers: 9380 kB' 'Cached: 9596304 kB' 'SwapCached: 0 kB' 'Active: 6439948 kB' 'Inactive: 3763032 kB' 'Active(anon): 6066924 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600472 kB' 'Mapped: 142260 kB' 'Shmem: 5469628 kB' 'KReclaimable: 197460 kB' 'Slab: 716076 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518616 kB' 'KernelStack: 20928 kB' 'PageTables: 7208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53472184 kB' 'Committed_AS: 8428976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219500 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:41.483
[xtrace condensed: setup/common.sh@31-32 read each meminfo field in turn (MemTotal through HugePages_Free) and "continue" past every one that does not match HugePages_Rsvd]
05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.485 05:01:38 -- setup/common.sh@33 -- # echo 0 00:03:41.485 05:01:38 -- setup/common.sh@33 -- # return 0 00:03:41.485 05:01:38 -- setup/hugepages.sh@100 -- # resv=0 00:03:41.485 05:01:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:41.485 nr_hugepages=1536 00:03:41.485 05:01:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.485 resv_hugepages=0 00:03:41.485 05:01:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.485 surplus_hugepages=0 00:03:41.485 05:01:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.485 anon_hugepages=0 00:03:41.485 05:01:38 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:41.485 05:01:38 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:41.485 05:01:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.485 05:01:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.485 05:01:38 -- setup/common.sh@18 -- # local node= 00:03:41.485 05:01:38 -- setup/common.sh@19 -- # local var val 00:03:41.485 05:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.485 05:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.485 05:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.485 05:01:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.485 05:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.485 05:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.485 05:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 
76999980 kB' 'MemAvailable: 80638588 kB' 'Buffers: 9380 kB' 'Cached: 9596328 kB' 'SwapCached: 0 kB' 'Active: 6443856 kB' 'Inactive: 3763032 kB' 'Active(anon): 6070832 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604368 kB' 'Mapped: 142672 kB' 'Shmem: 5469652 kB' 'KReclaimable: 197460 kB' 'Slab: 716076 kB' 'SReclaimable: 197460 kB' 'SUnreclaim: 518616 kB' 'KernelStack: 20944 kB' 'PageTables: 7280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53472184 kB' 'Committed_AS: 8432696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219504 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:41.485
[xtrace condensed: setup/common.sh@31-32 read each meminfo field in turn (MemTotal through Unaccepted) and "continue" past every one that does not match HugePages_Total]
05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.486 05:01:38 -- setup/common.sh@33 -- # echo 1536 00:03:41.486 05:01:38 -- setup/common.sh@33 -- # return 0 00:03:41.486 05:01:38 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:41.486 05:01:38 -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.486 05:01:38 -- setup/hugepages.sh@27 -- # local node 00:03:41.486 05:01:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.486 05:01:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.486 05:01:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.486 05:01:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:41.486 05:01:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.486 05:01:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.486 05:01:38 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:03:41.486 05:01:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.486 05:01:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.486 05:01:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.486 05:01:38 -- setup/common.sh@18 -- # local node=0 00:03:41.486 05:01:38 -- setup/common.sh@19 -- # local var val 00:03:41.486 05:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.486 05:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.486 05:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.486 05:01:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.486 05:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.486 05:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.486 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.486 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.486 05:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 28961416 kB' 'MemUsed: 3669180 kB' 'SwapCached: 0 kB' 'Active: 1606344 kB' 'Inactive: 176940 kB' 'Active(anon): 1415160 kB' 'Inactive(anon): 0 kB' 'Active(file): 191184 kB' 'Inactive(file): 176940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1624456 kB' 'Mapped: 30840 kB' 'AnonPages: 161948 kB' 'Shmem: 1256332 kB' 'KernelStack: 9432 kB' 'PageTables: 2980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98124 kB' 'Slab: 373884 kB' 'SReclaimable: 98124 kB' 'SUnreclaim: 275760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.486 05:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.486 05:01:38 -- 
setup/common.sh@32 -- # continue 00:03:41.486
[xtrace condensed: setup/common.sh@31-32 read each node0 meminfo field in turn (MemFree through NFS_Unstable) and "continue" past every one that does not match HugePages_Surp]
00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.487 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.487 05:01:38 -- setup/common.sh@33 -- # echo 0 00:03:41.487 05:01:38 -- setup/common.sh@33 -- # return 0 00:03:41.487 05:01:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.487 05:01:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.487 05:01:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.487 05:01:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:41.487 05:01:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.487 05:01:38 -- setup/common.sh@18 -- # local node=1 00:03:41.487 05:01:38 -- setup/common.sh@19 -- # local var val 00:03:41.487 05:01:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.487 05:01:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.487 05:01:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:41.487 05:01:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:41.487 05:01:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.487 05:01:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.487 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682296 kB' 'MemFree: 48044852 kB' 'MemUsed: 12637444 kB' 'SwapCached: 0 kB' 'Active: 4832188 kB' 'Inactive: 3586092 kB' 'Active(anon): 4650348 kB' 'Inactive(anon): 0 kB' 'Active(file): 181840 kB' 'Inactive(file): 3586092 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981256 kB' 'Mapped: 110916 kB' 'AnonPages: 437100 kB' 'Shmem: 4213324 kB' 'KernelStack: 11512 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99336 kB' 'Slab: 
342192 kB' 'SReclaimable: 99336 kB' 'SUnreclaim: 242856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- 
setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 
00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 
00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # continue 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.488 05:01:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.488 05:01:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.488 05:01:38 -- setup/common.sh@33 -- # echo 0 00:03:41.488 05:01:38 -- setup/common.sh@33 -- # return 0 00:03:41.488 05:01:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.488 05:01:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.488 05:01:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.488 05:01:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.488 05:01:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:41.488 node0=512 expecting 512 00:03:41.488 05:01:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.488 05:01:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.488 05:01:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.488 05:01:38 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:41.489 node1=1024 expecting 1024 00:03:41.489 05:01:38 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:41.489 00:03:41.489 real 0m3.303s 00:03:41.489 user 0m1.351s 00:03:41.489 sys 0m2.022s 00:03:41.489 05:01:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:41.489 05:01:38 -- common/autotest_common.sh@10 -- # set +x 00:03:41.489 ************************************ 00:03:41.489 END TEST custom_alloc 00:03:41.489 ************************************ 00:03:41.749 05:01:38 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc 
no_shrink_alloc 00:03:41.749 05:01:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:41.749 05:01:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:41.749 05:01:38 -- common/autotest_common.sh@10 -- # set +x 00:03:41.749 ************************************ 00:03:41.749 START TEST no_shrink_alloc 00:03:41.749 ************************************ 00:03:41.749 05:01:38 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:03:41.749 05:01:38 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:41.749 05:01:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:41.749 05:01:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:41.749 05:01:38 -- setup/hugepages.sh@51 -- # shift 00:03:41.749 05:01:38 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:41.749 05:01:38 -- setup/hugepages.sh@52 -- # local node_ids 00:03:41.749 05:01:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:41.749 05:01:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:41.749 05:01:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:41.749 05:01:38 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:41.749 05:01:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:41.749 05:01:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:41.749 05:01:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:41.749 05:01:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:41.749 05:01:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:41.749 05:01:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:41.749 05:01:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:41.749 05:01:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:41.749 05:01:38 -- setup/hugepages.sh@73 -- # return 0 00:03:41.749 05:01:38 -- setup/hugepages.sh@198 -- # setup output 00:03:41.749 05:01:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.749 05:01:38 -- setup/common.sh@10 
-- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:44.290 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:44.550 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:44.550 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.550 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.813 05:01:41 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:44.813 05:01:41 -- setup/hugepages.sh@89 -- # local node 00:03:44.813 05:01:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.813 05:01:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.813 05:01:41 -- setup/hugepages.sh@92 -- # local surp 00:03:44.813 05:01:41 -- setup/hugepages.sh@93 -- # local resv 00:03:44.813 05:01:41 -- setup/hugepages.sh@94 -- # local anon 00:03:44.813 05:01:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.813 05:01:41 -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.813 05:01:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.813 05:01:41 -- setup/common.sh@18 -- # local node= 00:03:44.813 05:01:41 -- setup/common.sh@19 -- # local var val 00:03:44.813 05:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.813 05:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.813 05:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.813 05:01:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.813 05:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.813 05:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.813 05:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78060636 kB' 'MemAvailable: 81699264 kB' 'Buffers: 9380 kB' 'Cached: 9596408 kB' 'SwapCached: 0 kB' 'Active: 6439184 kB' 'Inactive: 3763032 kB' 'Active(anon): 6066160 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599272 kB' 'Mapped: 142272 kB' 'Shmem: 5469732 kB' 'KReclaimable: 197500 kB' 'Slab: 716504 kB' 'SReclaimable: 197500 kB' 'SUnreclaim: 519004 kB' 'KernelStack: 21184 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8430076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219516 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.813 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.813 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.814 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.814 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.814 05:01:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.814 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.814 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.814 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.814 05:01:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.814 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.814 05:01:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.814 05:01:41 -- setup/common.sh@31-32 -- # (trace condensed: read/compare loop over /proc/meminfo keys Inactive through HardwareCorrupted — none match the AnonHugePages pattern) 00:03:44.814 05:01:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.814 05:01:41 -- setup/common.sh@33 -- # echo 0 00:03:44.814 05:01:41 -- setup/common.sh@33 -- # return 0 00:03:44.814 05:01:41 -- setup/hugepages.sh@97 -- # anon=0 00:03:44.814 05:01:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.814 05:01:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.814 05:01:41 -- setup/common.sh@18 -- # local node= 00:03:44.814 05:01:41 -- setup/common.sh@19 -- # local var val 00:03:44.814 05:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.814 05:01:41 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:44.814 05:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.814 05:01:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.814 05:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.814 05:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.814 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.814 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.815 05:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78061136 kB' 'MemAvailable: 81699764 kB' 'Buffers: 9380 kB' 'Cached: 9596412 kB' 'SwapCached: 0 kB' 'Active: 6438408 kB' 'Inactive: 3763032 kB' 'Active(anon): 6065384 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599096 kB' 'Mapped: 141784 kB' 'Shmem: 5469736 kB' 'KReclaimable: 197500 kB' 'Slab: 716424 kB' 'SReclaimable: 197500 kB' 'SUnreclaim: 518924 kB' 'KernelStack: 21104 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8431604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219484 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:44.815 05:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.815 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.815 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 
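The trace above shows the pattern `setup/common.sh` uses to look up a single value: `mapfile` the contents of /proc/meminfo, then a `read -r var val _` loop with `IFS=': '` that compares each key against the requested one and echoes the value on a match. A minimal standalone sketch of that technique (not the SPDK script itself; the function name mirrors the one in the trace, but the body here is an illustrative reimplementation):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-lookup pattern seen in the trace: split each
# "Key:   value kB" line on ':' and whitespace, compare the key against
# the requested name, and print the value on the first match.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1   # key not present
}

# Example lookup; HugePages_Surp / HugePages_Rsvd work the same way.
get_meminfo MemTotal
```

The trace loops with `continue` on every non-matching key, which is why each meminfo line produces three trace entries (`IFS=': '`, `read`, `[[ ... ]]`) until the requested key is reached.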
00:03:44.815 05:01:41 -- setup/common.sh@31-32 -- # (trace condensed: read/compare loop over /proc/meminfo keys MemFree through HugePages_Free — none match the HugePages_Surp pattern) 00:03:44.816 05:01:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.816 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.816 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.816 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.816 05:01:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.816 05:01:41 -- setup/common.sh@33 -- # echo 0 00:03:44.816 05:01:41 -- setup/common.sh@33 -- # return 0 00:03:44.816 05:01:41 -- setup/hugepages.sh@99 -- # surp=0 00:03:44.816 05:01:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.816 05:01:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.816 05:01:41 -- setup/common.sh@18 -- # local node= 00:03:44.816 05:01:41 -- setup/common.sh@19 -- # local var val 00:03:44.816 05:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.816 05:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.816 05:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.816 05:01:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.816 05:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.816 05:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.816 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.816 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.816 05:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78059976 kB' 'MemAvailable: 81698604 kB' 'Buffers: 9380 kB' 'Cached: 9596420 kB' 'SwapCached: 0 kB' 'Active: 6439132 kB' 'Inactive: 3763032 kB' 'Active(anon): 6066108 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599720 kB' 'Mapped: 141776 kB' 'Shmem: 5469744 kB' 'KReclaimable: 197500 kB' 'Slab: 716532 kB' 'SReclaimable: 197500 kB' 'SUnreclaim: 519032 kB' 'KernelStack: 
21168 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8431620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219564 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:44.816 05:01:41 -- setup/common.sh@31-32 -- # (trace condensed: read/compare loop over /proc/meminfo keys MemTotal through CmaFree — none match the HugePages_Rsvd pattern) 00:03:44.817 05:01:41 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.817 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.817 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.817 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.817 05:01:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.817 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.817 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.817 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.817 05:01:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.817 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.817 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.817 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.817 05:01:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.817 05:01:41 -- setup/common.sh@33 -- # echo 0 00:03:44.817 05:01:41 -- setup/common.sh@33 -- # return 0 00:03:44.817 05:01:41 -- setup/hugepages.sh@100 -- # resv=0 00:03:44.817 05:01:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.817 nr_hugepages=1024 00:03:44.817 05:01:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.817 resv_hugepages=0 00:03:44.817 05:01:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.817 surplus_hugepages=0 00:03:44.817 05:01:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.817 anon_hugepages=0 00:03:44.817 05:01:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.817 05:01:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.817 05:01:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.817 05:01:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.817 05:01:41 -- setup/common.sh@18 -- # local node= 00:03:44.817 05:01:41 -- setup/common.sh@19 -- # local var val 00:03:44.817 05:01:41 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:44.817 05:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.817 05:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.817 05:01:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.817 05:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.817 05:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.817 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.817 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78058568 kB' 'MemAvailable: 81697196 kB' 'Buffers: 9380 kB' 'Cached: 9596448 kB' 'SwapCached: 0 kB' 'Active: 6438356 kB' 'Inactive: 3763032 kB' 'Active(anon): 6065332 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598936 kB' 'Mapped: 141760 kB' 'Shmem: 5469772 kB' 'KReclaimable: 197500 kB' 'Slab: 716524 kB' 'SReclaimable: 197500 kB' 'SUnreclaim: 519024 kB' 'KernelStack: 21120 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8430120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219564 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 
00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.818 05:01:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.818 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.818 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.819 05:01:41 -- setup/common.sh@33 -- # echo 1024 00:03:44.819 05:01:41 -- setup/common.sh@33 -- # return 0 00:03:44.819 05:01:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.819 05:01:41 -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.819 05:01:41 -- setup/hugepages.sh@27 -- # local node 00:03:44.819 05:01:41 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:44.819 05:01:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.819 05:01:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.819 05:01:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.819 05:01:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.819 05:01:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.819 05:01:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.819 05:01:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.819 05:01:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.819 05:01:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.819 05:01:41 -- setup/common.sh@18 -- # local node=0 00:03:44.819 05:01:41 -- setup/common.sh@19 -- # local var val 00:03:44.819 05:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.819 05:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.819 05:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.819 05:01:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.819 05:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.819 05:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 27913336 kB' 'MemUsed: 4717260 kB' 'SwapCached: 0 kB' 'Active: 1606780 kB' 'Inactive: 176940 kB' 'Active(anon): 1415596 kB' 'Inactive(anon): 0 kB' 'Active(file): 191184 kB' 'Inactive(file): 176940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1624536 kB' 'Mapped: 30840 kB' 'AnonPages: 162444 kB' 'Shmem: 1256412 kB' 'KernelStack: 9496 kB' 'PageTables: 3128 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98092 kB' 'Slab: 374092 kB' 'SReclaimable: 98092 kB' 'SUnreclaim: 276000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.819 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.819 05:01:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # continue 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.820 05:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.820 05:01:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.820 05:01:41 -- setup/common.sh@33 -- # echo 0 00:03:44.820 05:01:41 -- setup/common.sh@33 -- # return 0 00:03:44.820 05:01:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.820 05:01:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.820 05:01:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.820 05:01:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.820 05:01:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.820 node0=1024 expecting 1024 00:03:44.820 05:01:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.820 05:01:41 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:44.820 05:01:41 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:44.820 05:01:41 -- setup/hugepages.sh@202 -- # setup output 00:03:44.820 05:01:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.820 05:01:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:47.362 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:47.934 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:47.934 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:00:04.5 (8086 2021): Already using the vfio-pci 
driver 00:03:47.934 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.934 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.934 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:47.934 05:01:44 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:47.934 05:01:44 -- setup/hugepages.sh@89 -- # local node 00:03:47.934 05:01:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.934 05:01:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.934 05:01:44 -- setup/hugepages.sh@92 -- # local surp 00:03:47.934 05:01:44 -- setup/hugepages.sh@93 -- # local resv 00:03:47.934 05:01:44 -- setup/hugepages.sh@94 -- # local anon 00:03:47.934 05:01:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.934 05:01:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.934 05:01:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.934 05:01:44 -- setup/common.sh@18 -- # local node= 00:03:47.934 05:01:44 -- setup/common.sh@19 -- # local var val 00:03:47.934 05:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.934 05:01:44 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:47.934 05:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.934 05:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.934 05:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.934 05:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78046948 kB' 'MemAvailable: 81685560 kB' 'Buffers: 9380 kB' 'Cached: 9596524 kB' 'SwapCached: 0 kB' 'Active: 6443932 kB' 'Inactive: 3763032 kB' 'Active(anon): 6070908 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604332 kB' 'Mapped: 142788 kB' 'Shmem: 5469848 kB' 'KReclaimable: 197468 kB' 'Slab: 716000 kB' 'SReclaimable: 197468 kB' 'SUnreclaim: 518532 kB' 'KernelStack: 20976 kB' 'PageTables: 7404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8433948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219472 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 
05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.934 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.934 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 
-- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.935 05:01:44 -- setup/common.sh@33 -- # echo 0 00:03:47.935 05:01:44 -- setup/common.sh@33 -- # return 0 00:03:47.935 05:01:44 -- setup/hugepages.sh@97 -- # anon=0 00:03:47.935 05:01:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.935 05:01:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.935 05:01:44 -- setup/common.sh@18 -- # local node= 00:03:47.935 05:01:44 -- setup/common.sh@19 -- # local var val 00:03:47.935 05:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.935 05:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.935 05:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.935 05:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.935 05:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.935 05:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 
05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78049448 kB' 'MemAvailable: 81688060 kB' 'Buffers: 9380 kB' 'Cached: 9596528 kB' 'SwapCached: 0 kB' 'Active: 6438132 kB' 'Inactive: 3763032 kB' 'Active(anon): 6065108 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598616 kB' 'Mapped: 142256 kB' 'Shmem: 5469852 kB' 'KReclaimable: 197468 kB' 'Slab: 716000 kB' 'SReclaimable: 197468 kB' 'SUnreclaim: 518532 kB' 'KernelStack: 20944 kB' 'PageTables: 7300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8427692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219404 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.935 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.935 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # 
continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 
-- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.936 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.936 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.937 05:01:44 -- setup/common.sh@33 -- # echo 0 00:03:47.937 05:01:44 -- setup/common.sh@33 -- # return 0 00:03:47.937 05:01:44 
-- setup/hugepages.sh@99 -- # surp=0 00:03:47.937 05:01:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.937 05:01:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.937 05:01:44 -- setup/common.sh@18 -- # local node= 00:03:47.937 05:01:44 -- setup/common.sh@19 -- # local var val 00:03:47.937 05:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.937 05:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.937 05:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.937 05:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.937 05:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.937 05:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.937 05:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78052900 kB' 'MemAvailable: 81691512 kB' 'Buffers: 9380 kB' 'Cached: 9596540 kB' 'SwapCached: 0 kB' 'Active: 6437892 kB' 'Inactive: 3763032 kB' 'Active(anon): 6064868 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598368 kB' 'Mapped: 141768 kB' 'Shmem: 5469864 kB' 'KReclaimable: 197468 kB' 'Slab: 715928 kB' 'SReclaimable: 197468 kB' 'SUnreclaim: 518460 kB' 'KernelStack: 20944 kB' 'PageTables: 7268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8427708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219388 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 
kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 
-- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- 
# continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.937 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.937 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.938 05:01:44 -- setup/common.sh@32 -- # continue 00:03:47.938 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.199 05:01:44 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.199 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.199 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.199 05:01:44 -- setup/common.sh@33 -- # echo 0 00:03:48.199 05:01:44 -- setup/common.sh@33 -- # return 0 00:03:48.199 05:01:44 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.199 05:01:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.199 nr_hugepages=1024 00:03:48.200 05:01:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.200 resv_hugepages=0 00:03:48.200 05:01:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.200 surplus_hugepages=0 00:03:48.200 05:01:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.200 anon_hugepages=0 00:03:48.200 05:01:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.200 05:01:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.200 05:01:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.200 05:01:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.200 05:01:44 -- setup/common.sh@18 -- # local node= 00:03:48.200 05:01:44 -- setup/common.sh@19 -- # local var val 00:03:48.200 05:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.200 05:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.200 05:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.200 05:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.200 05:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.200 05:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312892 kB' 'MemFree: 78052900 kB' 'MemAvailable: 81691512 kB' 'Buffers: 9380 kB' 'Cached: 9596552 kB' 'SwapCached: 0 kB' 'Active: 6438000 kB' 'Inactive: 3763032 kB' 'Active(anon): 6064976 kB' 'Inactive(anon): 0 kB' 'Active(file): 373024 kB' 'Inactive(file): 3763032 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597944 kB' 'Mapped: 141768 kB' 'Shmem: 5469876 kB' 'KReclaimable: 197468 kB' 'Slab: 715928 kB' 'SReclaimable: 197468 kB' 'SUnreclaim: 518460 kB' 'KernelStack: 20944 kB' 'PageTables: 7268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996472 kB' 'Committed_AS: 8427724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219388 kB' 'VmallocChunk: 0 kB' 'Percpu: 61824 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 551892 kB' 'DirectMap2M: 9613312 kB' 'DirectMap1G: 93323264 kB' 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.200 05:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.200 
05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.200 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- 
setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 
-- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.201 05:01:44 -- setup/common.sh@33 -- # echo 1024 00:03:48.201 05:01:44 -- setup/common.sh@33 -- # return 0 00:03:48.201 05:01:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.201 05:01:44 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.201 05:01:44 -- setup/hugepages.sh@27 -- # local node 00:03:48.201 05:01:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.201 05:01:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.201 05:01:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.201 05:01:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.201 05:01:44 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.201 05:01:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.201 05:01:44 -- setup/hugepages.sh@115 -- 
# for node in "${!nodes_test[@]}" 00:03:48.201 05:01:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.201 05:01:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.201 05:01:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.201 05:01:44 -- setup/common.sh@18 -- # local node=0 00:03:48.201 05:01:44 -- setup/common.sh@19 -- # local var val 00:03:48.201 05:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.201 05:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.201 05:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.201 05:01:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.201 05:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.201 05:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 27912728 kB' 'MemUsed: 4717868 kB' 'SwapCached: 0 kB' 'Active: 1605760 kB' 'Inactive: 176940 kB' 'Active(anon): 1414576 kB' 'Inactive(anon): 0 kB' 'Active(file): 191184 kB' 'Inactive(file): 176940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1624608 kB' 'Mapped: 30840 kB' 'AnonPages: 161216 kB' 'Shmem: 1256484 kB' 'KernelStack: 9448 kB' 'PageTables: 2984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98092 kB' 'Slab: 373616 kB' 'SReclaimable: 98092 kB' 'SUnreclaim: 275524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- 
setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 
00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.201 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.201 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # continue 00:03:48.202 05:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.202 05:01:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.202 05:01:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.202 05:01:44 -- setup/common.sh@33 -- # echo 0 00:03:48.202 05:01:44 -- setup/common.sh@33 -- # return 0 00:03:48.202 05:01:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.202 05:01:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.202 05:01:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.202 05:01:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.202 05:01:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.202 node0=1024 expecting 1024 00:03:48.202 05:01:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.202 00:03:48.202 real 0m6.498s 00:03:48.202 user 0m2.653s 00:03:48.202 sys 0m3.980s 00:03:48.202 05:01:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.202 05:01:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.202 ************************************ 00:03:48.202 END TEST no_shrink_alloc 00:03:48.202 ************************************ 00:03:48.202 05:01:44 -- setup/hugepages.sh@217 -- # clear_hp 00:03:48.202 05:01:44 -- setup/hugepages.sh@37 -- # local node hp 00:03:48.202 05:01:44 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.202 05:01:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.202 05:01:44 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.202 05:01:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.202 05:01:44 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.202 05:01:44 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.202 05:01:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.202 05:01:44 -- setup/hugepages.sh@41 -- # echo 0 
00:03:48.202 05:01:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.202 05:01:44 -- setup/hugepages.sh@41 -- # echo 0 00:03:48.202 05:01:44 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.202 05:01:44 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.202 00:03:48.202 real 0m24.395s 00:03:48.202 user 0m9.745s 00:03:48.202 sys 0m14.398s 00:03:48.202 05:01:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.202 05:01:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.202 ************************************ 00:03:48.202 END TEST hugepages 00:03:48.202 ************************************ 00:03:48.202 05:01:44 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/driver.sh 00:03:48.202 05:01:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.202 05:01:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.202 05:01:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.202 ************************************ 00:03:48.202 START TEST driver 00:03:48.202 ************************************ 00:03:48.202 05:01:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/driver.sh 00:03:48.202 * Looking for test storage... 
00:03:48.202 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:03:48.202 05:01:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:48.202 05:01:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:48.202 05:01:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:48.463 05:01:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:48.463 05:01:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:48.463 05:01:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:48.463 05:01:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:48.463 05:01:45 -- scripts/common.sh@335 -- # IFS=.-: 00:03:48.463 05:01:45 -- scripts/common.sh@335 -- # read -ra ver1 00:03:48.463 05:01:45 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.463 05:01:45 -- scripts/common.sh@336 -- # read -ra ver2 00:03:48.463 05:01:45 -- scripts/common.sh@337 -- # local 'op=<' 00:03:48.463 05:01:45 -- scripts/common.sh@339 -- # ver1_l=2 00:03:48.463 05:01:45 -- scripts/common.sh@340 -- # ver2_l=1 00:03:48.463 05:01:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:48.463 05:01:45 -- scripts/common.sh@343 -- # case "$op" in 00:03:48.463 05:01:45 -- scripts/common.sh@344 -- # : 1 00:03:48.463 05:01:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:48.463 05:01:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.463 05:01:45 -- scripts/common.sh@364 -- # decimal 1 00:03:48.463 05:01:45 -- scripts/common.sh@352 -- # local d=1 00:03:48.463 05:01:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.463 05:01:45 -- scripts/common.sh@354 -- # echo 1 00:03:48.463 05:01:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:48.463 05:01:45 -- scripts/common.sh@365 -- # decimal 2 00:03:48.463 05:01:45 -- scripts/common.sh@352 -- # local d=2 00:03:48.463 05:01:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.463 05:01:45 -- scripts/common.sh@354 -- # echo 2 00:03:48.463 05:01:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:48.463 05:01:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:48.463 05:01:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:48.463 05:01:45 -- scripts/common.sh@367 -- # return 0 00:03:48.463 05:01:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.463 05:01:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:48.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.463 --rc genhtml_branch_coverage=1 00:03:48.463 --rc genhtml_function_coverage=1 00:03:48.463 --rc genhtml_legend=1 00:03:48.463 --rc geninfo_all_blocks=1 00:03:48.463 --rc geninfo_unexecuted_blocks=1 00:03:48.463 00:03:48.463 ' 00:03:48.463 05:01:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:48.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.463 --rc genhtml_branch_coverage=1 00:03:48.463 --rc genhtml_function_coverage=1 00:03:48.463 --rc genhtml_legend=1 00:03:48.463 --rc geninfo_all_blocks=1 00:03:48.463 --rc geninfo_unexecuted_blocks=1 00:03:48.463 00:03:48.463 ' 00:03:48.463 05:01:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:48.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.463 --rc genhtml_branch_coverage=1 00:03:48.463 --rc 
genhtml_function_coverage=1 00:03:48.463 --rc genhtml_legend=1 00:03:48.463 --rc geninfo_all_blocks=1 00:03:48.463 --rc geninfo_unexecuted_blocks=1 00:03:48.463 00:03:48.463 ' 00:03:48.463 05:01:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:48.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.463 --rc genhtml_branch_coverage=1 00:03:48.463 --rc genhtml_function_coverage=1 00:03:48.463 --rc genhtml_legend=1 00:03:48.463 --rc geninfo_all_blocks=1 00:03:48.463 --rc geninfo_unexecuted_blocks=1 00:03:48.463 00:03:48.463 ' 00:03:48.463 05:01:45 -- setup/driver.sh@68 -- # setup reset 00:03:48.463 05:01:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.463 05:01:45 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.661 05:01:49 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:52.661 05:01:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.661 05:01:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.661 05:01:49 -- common/autotest_common.sh@10 -- # set +x 00:03:52.661 ************************************ 00:03:52.661 START TEST guess_driver 00:03:52.661 ************************************ 00:03:52.661 05:01:49 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:52.661 05:01:49 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:52.661 05:01:49 -- setup/driver.sh@47 -- # local fail=0 00:03:52.661 05:01:49 -- setup/driver.sh@49 -- # pick_driver 00:03:52.661 05:01:49 -- setup/driver.sh@36 -- # vfio 00:03:52.661 05:01:49 -- setup/driver.sh@21 -- # local iommu_grups 00:03:52.661 05:01:49 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:52.661 05:01:49 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:52.661 05:01:49 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:52.661 05:01:49 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:52.661 
05:01:49 -- setup/driver.sh@29 -- # (( 172 > 0 )) 00:03:52.661 05:01:49 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:52.661 05:01:49 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:52.661 05:01:49 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:52.661 05:01:49 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:52.661 05:01:49 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:52.661 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:52.661 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:52.661 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:52.661 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:52.661 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:52.661 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:52.661 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:52.661 05:01:49 -- setup/driver.sh@30 -- # return 0 00:03:52.661 05:01:49 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:52.661 05:01:49 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:52.661 05:01:49 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:52.661 05:01:49 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:52.661 Looking for driver=vfio-pci 00:03:52.661 05:01:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.661 05:01:49 -- setup/driver.sh@45 -- # setup output config 00:03:52.661 05:01:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.661 05:01:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ denied == \-\> ]] 00:03:55.956 05:01:52 -- 
setup/driver.sh@58 -- # continue 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 
00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.956 05:01:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.956 05:01:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.956 05:01:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.896 05:01:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.896 05:01:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.896 05:01:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.896 05:01:53 
-- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:56.896 05:01:53 -- setup/driver.sh@65 -- # setup reset 00:03:56.896 05:01:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.896 05:01:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.097 00:04:01.097 real 0m8.445s 00:04:01.097 user 0m2.577s 00:04:01.097 sys 0m4.377s 00:04:01.097 05:01:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.097 05:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:01.097 ************************************ 00:04:01.097 END TEST guess_driver 00:04:01.097 ************************************ 00:04:01.097 00:04:01.097 real 0m12.980s 00:04:01.097 user 0m3.999s 00:04:01.097 sys 0m6.746s 00:04:01.097 05:01:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.097 05:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:01.097 ************************************ 00:04:01.097 END TEST driver 00:04:01.097 ************************************ 00:04:01.097 05:01:57 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/devices.sh 00:04:01.097 05:01:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.097 05:01:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.097 05:01:57 -- common/autotest_common.sh@10 -- # set +x 00:04:01.097 ************************************ 00:04:01.097 START TEST devices 00:04:01.097 ************************************ 00:04:01.097 05:01:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/devices.sh 00:04:01.356 * Looking for test storage... 
00:04:01.356 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:04:01.356 05:01:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:01.356 05:01:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:01.356 05:01:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:01.356 05:01:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:01.356 05:01:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:01.356 05:01:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:01.356 05:01:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:01.356 05:01:58 -- scripts/common.sh@335 -- # IFS=.-: 00:04:01.356 05:01:58 -- scripts/common.sh@335 -- # read -ra ver1 00:04:01.356 05:01:58 -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.356 05:01:58 -- scripts/common.sh@336 -- # read -ra ver2 00:04:01.356 05:01:58 -- scripts/common.sh@337 -- # local 'op=<' 00:04:01.356 05:01:58 -- scripts/common.sh@339 -- # ver1_l=2 00:04:01.356 05:01:58 -- scripts/common.sh@340 -- # ver2_l=1 00:04:01.356 05:01:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:01.356 05:01:58 -- scripts/common.sh@343 -- # case "$op" in 00:04:01.356 05:01:58 -- scripts/common.sh@344 -- # : 1 00:04:01.356 05:01:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:01.356 05:01:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.356 05:01:58 -- scripts/common.sh@364 -- # decimal 1 00:04:01.356 05:01:58 -- scripts/common.sh@352 -- # local d=1 00:04:01.356 05:01:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.356 05:01:58 -- scripts/common.sh@354 -- # echo 1 00:04:01.356 05:01:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:01.356 05:01:58 -- scripts/common.sh@365 -- # decimal 2 00:04:01.356 05:01:58 -- scripts/common.sh@352 -- # local d=2 00:04:01.357 05:01:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.357 05:01:58 -- scripts/common.sh@354 -- # echo 2 00:04:01.357 05:01:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:01.357 05:01:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:01.357 05:01:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:01.357 05:01:58 -- scripts/common.sh@367 -- # return 0 00:04:01.357 05:01:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.357 05:01:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:01.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.357 --rc genhtml_branch_coverage=1 00:04:01.357 --rc genhtml_function_coverage=1 00:04:01.357 --rc genhtml_legend=1 00:04:01.357 --rc geninfo_all_blocks=1 00:04:01.357 --rc geninfo_unexecuted_blocks=1 00:04:01.357 00:04:01.357 ' 00:04:01.357 05:01:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:01.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.357 --rc genhtml_branch_coverage=1 00:04:01.357 --rc genhtml_function_coverage=1 00:04:01.357 --rc genhtml_legend=1 00:04:01.357 --rc geninfo_all_blocks=1 00:04:01.357 --rc geninfo_unexecuted_blocks=1 00:04:01.357 00:04:01.357 ' 00:04:01.357 05:01:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:01.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.357 --rc genhtml_branch_coverage=1 00:04:01.357 --rc 
genhtml_function_coverage=1 00:04:01.357 --rc genhtml_legend=1 00:04:01.357 --rc geninfo_all_blocks=1 00:04:01.357 --rc geninfo_unexecuted_blocks=1 00:04:01.357 00:04:01.357 ' 00:04:01.357 05:01:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:01.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.357 --rc genhtml_branch_coverage=1 00:04:01.357 --rc genhtml_function_coverage=1 00:04:01.357 --rc genhtml_legend=1 00:04:01.357 --rc geninfo_all_blocks=1 00:04:01.357 --rc geninfo_unexecuted_blocks=1 00:04:01.357 00:04:01.357 ' 00:04:01.357 05:01:58 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:01.357 05:01:58 -- setup/devices.sh@192 -- # setup reset 00:04:01.357 05:01:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.357 05:01:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.570 05:02:01 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:05.570 05:02:01 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:05.570 05:02:01 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:05.570 05:02:01 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:05.570 05:02:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.570 05:02:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:05.570 05:02:01 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:05.570 05:02:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.570 05:02:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:05.570 05:02:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.570 05:02:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:05.570 05:02:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:05.570 05:02:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.570 05:02:01 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:05.570 05:02:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.570 05:02:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:05.570 05:02:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:05.570 05:02:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:05.570 05:02:01 -- common/autotest_common.sh@1660 -- # [[ host-managed != none ]] 00:04:05.570 05:02:01 -- common/autotest_common.sh@1669 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:04:05.570 05:02:01 -- setup/devices.sh@196 -- # blocks=() 00:04:05.570 05:02:01 -- setup/devices.sh@196 -- # declare -a blocks 00:04:05.570 05:02:01 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:05.570 05:02:01 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:05.570 05:02:01 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:05.570 05:02:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.570 05:02:01 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:05.570 05:02:01 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:05.570 05:02:01 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:05.570 05:02:01 -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:05.570 05:02:01 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:05.570 05:02:01 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:05.570 05:02:01 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:05.570 No valid GPT data, bailing 00:04:05.570 05:02:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.570 05:02:01 -- scripts/common.sh@393 -- # pt= 00:04:05.570 05:02:01 -- scripts/common.sh@394 -- # return 1 00:04:05.570 05:02:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:05.570 05:02:01 -- setup/common.sh@76 -- # local dev=nvme0n1 
00:04:05.570 05:02:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:05.570 05:02:01 -- setup/common.sh@80 -- # echo 1000204886016 00:04:05.570 05:02:01 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:05.570 05:02:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.570 05:02:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:05.571 05:02:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.571 05:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:05.571 05:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:05.571 05:02:01 -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:05.571 05:02:01 -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:05.571 05:02:01 -- setup/devices.sh@203 -- # continue 00:04:05.571 05:02:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.571 05:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:05.571 05:02:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:05.571 05:02:01 -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:05.571 05:02:01 -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:05.571 05:02:01 -- setup/devices.sh@203 -- # continue 00:04:05.571 05:02:01 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:05.571 05:02:01 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:05.571 05:02:01 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:05.571 05:02:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.571 05:02:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.571 05:02:01 -- common/autotest_common.sh@10 -- # set +x 00:04:05.571 ************************************ 00:04:05.571 START TEST nvme_mount 00:04:05.571 ************************************ 00:04:05.571 05:02:01 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:05.571 05:02:01 -- setup/devices.sh@95 -- # 
nvme_disk=nvme0n1 00:04:05.571 05:02:01 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:05.571 05:02:01 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.571 05:02:01 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.571 05:02:01 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:05.571 05:02:01 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:05.571 05:02:01 -- setup/common.sh@40 -- # local part_no=1 00:04:05.571 05:02:01 -- setup/common.sh@41 -- # local size=1073741824 00:04:05.571 05:02:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:05.571 05:02:01 -- setup/common.sh@44 -- # parts=() 00:04:05.571 05:02:01 -- setup/common.sh@44 -- # local parts 00:04:05.571 05:02:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:05.571 05:02:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.571 05:02:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:05.571 05:02:01 -- setup/common.sh@46 -- # (( part++ )) 00:04:05.571 05:02:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.571 05:02:01 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:05.571 05:02:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:05.571 05:02:01 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:05.832 Creating new GPT entries in memory. 00:04:05.832 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:05.832 other utilities. 00:04:05.832 05:02:02 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:05.832 05:02:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.832 05:02:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:05.832 05:02:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.832 05:02:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:07.213 Creating new GPT entries in memory. 00:04:07.213 The operation has completed successfully. 00:04:07.213 05:02:03 -- setup/common.sh@57 -- # (( part++ )) 00:04:07.213 05:02:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.213 05:02:03 -- setup/common.sh@62 -- # wait 84791 00:04:07.213 05:02:03 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.213 05:02:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:07.213 05:02:03 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.213 05:02:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:07.213 05:02:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:07.213 05:02:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.213 05:02:03 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.213 05:02:03 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:07.213 05:02:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:07.213 05:02:03 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.213 05:02:03 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.213 05:02:03 -- setup/devices.sh@53 -- # local found=0 
00:04:07.213 05:02:03 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.213 05:02:03 -- setup/devices.sh@56 -- # : 00:04:07.213 05:02:03 -- setup/devices.sh@59 -- # local pci status 00:04:07.213 05:02:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.213 05:02:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:07.213 05:02:03 -- setup/devices.sh@47 -- # setup output config 00:04:07.213 05:02:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.213 05:02:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:04:09.755 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.755 05:02:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:09.755 05:02:06 -- setup/devices.sh@63 -- # found=1 00:04:09.755 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.755 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.755 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 
0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.015 05:02:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.015 05:02:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.276 
05:02:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.276 05:02:06 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:10.276 05:02:06 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.276 05:02:06 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.276 05:02:06 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.276 05:02:06 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:10.276 05:02:06 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.276 05:02:06 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.276 05:02:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.276 05:02:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:10.276 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.276 05:02:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.276 05:02:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.536 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:10.536 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:10.536 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.536 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.536 05:02:07 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:10.536 05:02:07 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:10.537 05:02:07 
-- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.537 05:02:07 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:10.537 05:02:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:10.537 05:02:07 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.537 05:02:07 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.537 05:02:07 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:10.537 05:02:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:10.537 05:02:07 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.537 05:02:07 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.537 05:02:07 -- setup/devices.sh@53 -- # local found=0 00:04:10.537 05:02:07 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.537 05:02:07 -- setup/devices.sh@56 -- # : 00:04:10.537 05:02:07 -- setup/devices.sh@59 -- # local pci status 00:04:10.537 05:02:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:10.537 05:02:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.537 05:02:07 -- setup/devices.sh@47 -- # setup output config 00:04:10.537 05:02:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.537 05:02:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:04:13.079 05:02:09 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.079 05:02:09 -- setup/devices.sh@62 -- # [[ Active devices: 
mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:13.079 05:02:09 -- setup/devices.sh@63 -- # found=1 00:04:13.079 05:02:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.366 05:02:09 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # 
[[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.367 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.367 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.627 05:02:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.627 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.627 05:02:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.627 05:02:10 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:13.627 05:02:10 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.627 05:02:10 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.627 05:02:10 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.627 05:02:10 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.627 05:02:10 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:13.627 05:02:10 -- 
setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:13.627 05:02:10 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:13.627 05:02:10 -- setup/devices.sh@50 -- # local mount_point= 00:04:13.627 05:02:10 -- setup/devices.sh@51 -- # local test_file= 00:04:13.627 05:02:10 -- setup/devices.sh@53 -- # local found=0 00:04:13.627 05:02:10 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:13.627 05:02:10 -- setup/devices.sh@59 -- # local pci status 00:04:13.627 05:02:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.627 05:02:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:13.627 05:02:10 -- setup/devices.sh@47 -- # setup output config 00:04:13.627 05:02:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.627 05:02:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:16.923 05:02:13 -- setup/devices.sh@63 -- # found=1 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 
05:02:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.923 05:02:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.923 05:02:13 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:16.923 05:02:13 -- setup/devices.sh@68 -- # return 0 00:04:16.923 05:02:13 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:16.923 05:02:13 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.923 05:02:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:16.923 05:02:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.923 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:16.923 00:04:16.923 real 0m11.953s 00:04:16.923 user 0m3.714s 00:04:16.923 sys 0m6.073s 00:04:16.923 05:02:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:16.923 05:02:13 -- common/autotest_common.sh@10 -- # set +x 00:04:16.923 ************************************ 00:04:16.923 END TEST nvme_mount 00:04:16.923 ************************************ 00:04:16.923 05:02:13 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:16.923 05:02:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.923 05:02:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.923 05:02:13 -- common/autotest_common.sh@10 -- # set +x 00:04:16.923 ************************************ 00:04:16.923 START TEST dm_mount 00:04:16.923 ************************************ 00:04:16.923 05:02:13 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:16.923 05:02:13 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:16.923 05:02:13 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:16.923 05:02:13 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:16.923 05:02:13 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:16.923 
05:02:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.923 05:02:13 -- setup/common.sh@40 -- # local part_no=2 00:04:16.923 05:02:13 -- setup/common.sh@41 -- # local size=1073741824 00:04:16.923 05:02:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.923 05:02:13 -- setup/common.sh@44 -- # parts=() 00:04:16.923 05:02:13 -- setup/common.sh@44 -- # local parts 00:04:16.923 05:02:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.923 05:02:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.923 05:02:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.923 05:02:13 -- setup/common.sh@46 -- # (( part++ )) 00:04:16.923 05:02:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.923 05:02:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.923 05:02:13 -- setup/common.sh@46 -- # (( part++ )) 00:04:16.923 05:02:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.923 05:02:13 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:16.923 05:02:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.923 05:02:13 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:17.863 Creating new GPT entries in memory. 00:04:17.863 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.863 other utilities. 00:04:17.863 05:02:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.863 05:02:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.863 05:02:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.863 05:02:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.863 05:02:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.802 Creating new GPT entries in memory. 00:04:18.802 The operation has completed successfully. 
00:04:18.802 05:02:15 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.802 05:02:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.802 05:02:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.802 05:02:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.802 05:02:15 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:20.183 The operation has completed successfully. 00:04:20.183 05:02:16 -- setup/common.sh@57 -- # (( part++ )) 00:04:20.183 05:02:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.183 05:02:16 -- setup/common.sh@62 -- # wait 89334 00:04:20.183 05:02:16 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:20.183 05:02:16 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:20.183 05:02:16 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.183 05:02:16 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:20.183 05:02:16 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:20.183 05:02:16 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:20.183 05:02:16 -- setup/devices.sh@161 -- # break 00:04:20.183 05:02:16 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:20.183 05:02:16 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:20.183 05:02:16 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:20.183 05:02:16 -- setup/devices.sh@166 -- # dm=dm-0 00:04:20.183 05:02:16 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:20.183 05:02:16 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:20.183 05:02:16 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:20.183 05:02:16 -- 
setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount size= 00:04:20.183 05:02:16 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:20.183 05:02:16 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:20.183 05:02:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:20.183 05:02:16 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:20.183 05:02:16 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.183 05:02:16 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:20.183 05:02:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:20.183 05:02:16 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:20.183 05:02:16 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.183 05:02:16 -- setup/devices.sh@53 -- # local found=0 00:04:20.183 05:02:16 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:20.183 05:02:16 -- setup/devices.sh@56 -- # : 00:04:20.183 05:02:16 -- setup/devices.sh@59 -- # local pci status 00:04:20.183 05:02:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.183 05:02:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:20.183 05:02:16 -- setup/devices.sh@47 -- # setup output config 00:04:20.183 05:02:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.183 05:02:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 
config 00:04:22.723 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.723 05:02:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.723 05:02:19 -- setup/devices.sh@63 -- # found=1 00:04:22.723 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.723 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.723 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.983 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.983 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.984 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.984 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.984 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.984 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.984 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.984 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.984 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.984 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.984 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.984 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.984 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.984 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.243 05:02:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.243 05:02:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:23.244 05:02:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:23.244 05:02:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:23.244 05:02:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:23.244 05:02:19 -- 
setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:23.244 05:02:19 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:23.244 05:02:19 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:23.244 05:02:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:23.244 05:02:19 -- setup/devices.sh@50 -- # local mount_point= 00:04:23.244 05:02:19 -- setup/devices.sh@51 -- # local test_file= 00:04:23.244 05:02:19 -- setup/devices.sh@53 -- # local found=0 00:04:23.244 05:02:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.244 05:02:19 -- setup/devices.sh@59 -- # local pci status 00:04:23.244 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.244 05:02:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:23.244 05:02:19 -- setup/devices.sh@47 -- # setup output config 00:04:23.244 05:02:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.244 05:02:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:04:25.785 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.785 05:02:22 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:25.785 05:02:22 -- setup/devices.sh@63 -- # found=1 00:04:25.785 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.785 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.785 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 
-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 05:02:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:26.045 05:02:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.305 05:02:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.305 05:02:23 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:26.305 05:02:23 -- setup/devices.sh@68 -- # return 0 00:04:26.305 05:02:23 -- setup/devices.sh@187 -- # cleanup_dm 00:04:26.305 05:02:23 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:26.305 05:02:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.305 05:02:23 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:26.305 05:02:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.305 05:02:23 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:26.305 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.305 05:02:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.306 05:02:23 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:26.306 00:04:26.306 real 0m9.499s 00:04:26.306 user 0m2.429s 00:04:26.306 sys 0m4.101s 00:04:26.306 05:02:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.306 05:02:23 -- common/autotest_common.sh@10 -- # set +x 00:04:26.306 ************************************ 00:04:26.306 END TEST dm_mount 00:04:26.306 ************************************ 00:04:26.306 05:02:23 -- setup/devices.sh@1 -- # cleanup 00:04:26.306 05:02:23 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:26.306 05:02:23 -- 
setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.306 05:02:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.306 05:02:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:26.306 05:02:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.306 05:02:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.565 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:26.565 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:26.565 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.565 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.565 05:02:23 -- setup/devices.sh@12 -- # cleanup_dm 00:04:26.565 05:02:23 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:26.825 05:02:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.825 05:02:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.825 05:02:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.825 05:02:23 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.825 05:02:23 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:26.825 00:04:26.825 real 0m25.484s 00:04:26.825 user 0m7.645s 00:04:26.825 sys 0m12.594s 00:04:26.825 05:02:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.825 05:02:23 -- common/autotest_common.sh@10 -- # set +x 00:04:26.825 ************************************ 00:04:26.825 END TEST devices 00:04:26.825 ************************************ 00:04:26.825 00:04:26.825 real 1m24.979s 00:04:26.825 user 0m29.122s 00:04:26.825 sys 0m46.894s 00:04:26.825 05:02:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.825 05:02:23 -- common/autotest_common.sh@10 -- # set +x 00:04:26.825 ************************************ 
00:04:26.825 END TEST setup.sh 00:04:26.825 ************************************ 00:04:26.825 05:02:23 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status 00:04:29.365 Hugepages 00:04:29.365 node hugesize free / total 00:04:29.365 node0 1048576kB 0 / 0 00:04:29.365 node0 2048kB 2048 / 2048 00:04:29.365 node1 1048576kB 0 / 0 00:04:29.365 node1 2048kB 0 / 0 00:04:29.365 00:04:29.365 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.365 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:29.365 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:29.365 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:29.365 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:29.365 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:29.365 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:29.365 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:29.365 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:29.626 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:29.626 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:04:29.626 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:29.626 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:29.626 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:29.626 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:29.626 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:29.626 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:29.626 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:29.626 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:29.626 05:02:26 -- spdk/autotest.sh@128 -- # uname -s 00:04:29.626 05:02:26 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:29.626 05:02:26 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:29.626 05:02:26 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:04:32.923 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:32.923 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 
00:04:32.923 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:32.923 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:33.494 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:33.752 05:02:30 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:34.692 05:02:31 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:34.692 05:02:31 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:34.692 05:02:31 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.692 05:02:31 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:34.692 05:02:31 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:34.692 05:02:31 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:34.692 05:02:31 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.692 05:02:31 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:34.692 05:02:31 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:34.692 05:02:31 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:34.692 05:02:31 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0 00:04:34.692 05:02:31 -- 
common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.989 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:37.989 Waiting for block devices as requested 00:04:37.989 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:37.989 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:37.989 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:37.989 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:38.249 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:38.249 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:38.249 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:38.249 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:38.509 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:38.509 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:38.509 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:38.770 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:38.770 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:38.770 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:38.770 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:39.030 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:39.030 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:39.030 05:02:35 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:39.030 05:02:35 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:39.030 05:02:35 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:39.030 05:02:35 -- common/autotest_common.sh@1497 -- # grep 0000:5e:00.0/nvme/nvme 00:04:39.030 05:02:35 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:39.030 05:02:35 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:39.030 05:02:35 -- common/autotest_common.sh@1502 -- # basename 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:39.030 05:02:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:39.030 05:02:35 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:39.030 05:02:35 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:39.030 05:02:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:39.030 05:02:35 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:39.030 05:02:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.030 05:02:35 -- common/autotest_common.sh@1540 -- # oacs=' 0xf' 00:04:39.030 05:02:35 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:39.031 05:02:35 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:39.291 05:02:35 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:39.291 05:02:35 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:39.291 05:02:35 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:39.291 05:02:35 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:39.291 05:02:35 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:39.291 05:02:35 -- common/autotest_common.sh@1552 -- # continue 00:04:39.291 05:02:35 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:39.291 05:02:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.291 05:02:35 -- common/autotest_common.sh@10 -- # set +x 00:04:39.291 05:02:35 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:39.291 05:02:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.291 05:02:35 -- common/autotest_common.sh@10 -- # set +x 00:04:39.291 05:02:35 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:04:41.831 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:42.091 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.091 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:00:04.5 (8086 2021): 
ioatdma -> vfio-pci 00:04:42.350 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.350 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.287 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:43.287 05:02:39 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:43.287 05:02:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.287 05:02:39 -- common/autotest_common.sh@10 -- # set +x 00:04:43.287 05:02:39 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:43.287 05:02:39 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:43.287 05:02:39 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.287 05:02:39 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:43.287 05:02:39 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:43.287 05:02:39 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:43.287 05:02:39 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:43.287 05:02:39 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:43.287 05:02:39 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.287 05:02:39 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.287 05:02:39 -- common/autotest_common.sh@1509 -- # jq -r 
'.config[].params.traddr' 00:04:43.287 05:02:40 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:43.287 05:02:40 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0 00:04:43.287 05:02:40 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:43.287 05:02:40 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:43.287 05:02:40 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:04:43.287 05:02:40 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:43.287 05:02:40 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:04:43.287 05:02:40 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:5e:00.0 00:04:43.287 05:02:40 -- common/autotest_common.sh@1587 -- # [[ -z 0000:5e:00.0 ]] 00:04:43.287 05:02:40 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.287 05:02:40 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=98673 00:04:43.287 05:02:40 -- common/autotest_common.sh@1593 -- # waitforlisten 98673 00:04:43.287 05:02:40 -- common/autotest_common.sh@829 -- # '[' -z 98673 ']' 00:04:43.287 05:02:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.287 05:02:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.287 05:02:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.287 05:02:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.287 05:02:40 -- common/autotest_common.sh@10 -- # set +x 00:04:43.547 [2024-11-20 05:02:40.128653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:43.547 [2024-11-20 05:02:40.128703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98673 ] 00:04:43.547 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.547 [2024-11-20 05:02:40.197219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.547 [2024-11-20 05:02:40.271424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:43.547 [2024-11-20 05:02:40.271554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.118 05:02:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.118 05:02:40 -- common/autotest_common.sh@862 -- # return 0 00:04:44.118 05:02:40 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:04:44.118 05:02:40 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:04:44.118 05:02:40 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:47.409 nvme0n1 00:04:47.409 05:02:43 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:47.409 [2024-11-20 05:02:44.092699] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:47.409 [2024-11-20 05:02:44.092729] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:47.409 request: 00:04:47.409 { 00:04:47.409 "nvme_ctrlr_name": "nvme0", 00:04:47.409 "password": "test", 00:04:47.409 "method": "bdev_nvme_opal_revert", 00:04:47.409 "req_id": 1 00:04:47.409 } 00:04:47.409 Got JSON-RPC error response 00:04:47.409 response: 00:04:47.409 { 00:04:47.409 "code": -32603, 00:04:47.409 "message": "Internal error" 00:04:47.409 } 
00:04:47.409 05:02:44 -- common/autotest_common.sh@1599 -- # true 00:04:47.409 05:02:44 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:04:47.409 05:02:44 -- common/autotest_common.sh@1603 -- # killprocess 98673 00:04:47.409 05:02:44 -- common/autotest_common.sh@936 -- # '[' -z 98673 ']' 00:04:47.409 05:02:44 -- common/autotest_common.sh@940 -- # kill -0 98673 00:04:47.409 05:02:44 -- common/autotest_common.sh@941 -- # uname 00:04:47.409 05:02:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:47.409 05:02:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98673 00:04:47.409 05:02:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:47.409 05:02:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:47.409 05:02:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98673' 00:04:47.409 killing process with pid 98673 00:04:47.409 05:02:44 -- common/autotest_common.sh@955 -- # kill 98673 00:04:47.409 05:02:44 -- common/autotest_common.sh@960 -- # wait 98673 00:04:49.318 05:02:45 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:49.318 05:02:45 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:49.318 05:02:45 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:49.318 05:02:45 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:49.318 05:02:45 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:49.318 05:02:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.318 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:04:49.318 05:02:45 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 00:04:49.318 05:02:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.318 05:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.318 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:04:49.318 ************************************ 00:04:49.318 START TEST env 00:04:49.318 
************************************ 00:04:49.318 05:02:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 00:04:49.318 * Looking for test storage... 00:04:49.318 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env 00:04:49.318 05:02:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.318 05:02:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.318 05:02:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.318 05:02:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.318 05:02:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.318 05:02:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.318 05:02:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.318 05:02:45 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.318 05:02:45 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.318 05:02:45 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.318 05:02:45 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.318 05:02:45 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.318 05:02:45 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.318 05:02:45 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.318 05:02:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.318 05:02:45 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.318 05:02:45 -- scripts/common.sh@344 -- # : 1 00:04:49.318 05:02:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.318 05:02:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.318 05:02:45 -- scripts/common.sh@364 -- # decimal 1 00:04:49.318 05:02:45 -- scripts/common.sh@352 -- # local d=1 00:04:49.318 05:02:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.318 05:02:45 -- scripts/common.sh@354 -- # echo 1 00:04:49.318 05:02:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.318 05:02:45 -- scripts/common.sh@365 -- # decimal 2 00:04:49.318 05:02:45 -- scripts/common.sh@352 -- # local d=2 00:04:49.318 05:02:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.318 05:02:45 -- scripts/common.sh@354 -- # echo 2 00:04:49.318 05:02:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.318 05:02:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.318 05:02:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.318 05:02:45 -- scripts/common.sh@367 -- # return 0 00:04:49.318 05:02:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.318 05:02:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.318 --rc genhtml_branch_coverage=1 00:04:49.318 --rc genhtml_function_coverage=1 00:04:49.318 --rc genhtml_legend=1 00:04:49.318 --rc geninfo_all_blocks=1 00:04:49.318 --rc geninfo_unexecuted_blocks=1 00:04:49.318 00:04:49.318 ' 00:04:49.318 05:02:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.318 --rc genhtml_branch_coverage=1 00:04:49.318 --rc genhtml_function_coverage=1 00:04:49.318 --rc genhtml_legend=1 00:04:49.318 --rc geninfo_all_blocks=1 00:04:49.318 --rc geninfo_unexecuted_blocks=1 00:04:49.318 00:04:49.318 ' 00:04:49.318 05:02:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.318 --rc genhtml_branch_coverage=1 00:04:49.318 --rc 
genhtml_function_coverage=1
00:04:49.319 --rc genhtml_legend=1
00:04:49.319 --rc geninfo_all_blocks=1
00:04:49.319 --rc geninfo_unexecuted_blocks=1
00:04:49.319 
00:04:49.319 '
00:04:49.319 05:02:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:49.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:49.319 --rc genhtml_branch_coverage=1
00:04:49.319 --rc genhtml_function_coverage=1
00:04:49.319 --rc genhtml_legend=1
00:04:49.319 --rc geninfo_all_blocks=1
00:04:49.319 --rc geninfo_unexecuted_blocks=1
00:04:49.319 
00:04:49.319 '
00:04:49.319 05:02:45 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut
00:04:49.319 05:02:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:49.319 05:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:49.319 05:02:45 -- common/autotest_common.sh@10 -- # set +x
00:04:49.319 ************************************
00:04:49.319 START TEST env_memory
00:04:49.319 ************************************
00:04:49.319 05:02:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut
00:04:49.319 
00:04:49.319 
00:04:49.319 CUnit - A unit testing framework for C - Version 2.1-3
00:04:49.319 http://cunit.sourceforge.net/
00:04:49.319 
00:04:49.319 
00:04:49.319 Suite: memory
00:04:49.319 Test: alloc and free memory map ...[2024-11-20 05:02:46.002619] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:49.319 passed
00:04:49.319 Test: mem map translation ...[2024-11-20 05:02:46.022223] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:49.319 [2024-11-20 05:02:46.022238] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:49.319 [2024-11-20 05:02:46.022271] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:49.319 [2024-11-20 05:02:46.022276] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:49.319 passed
00:04:49.319 Test: mem map registration ...[2024-11-20 05:02:46.063238] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:04:49.319 [2024-11-20 05:02:46.063253] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:04:49.319 passed
00:04:49.319 Test: mem map adjacent registrations ...passed
00:04:49.319 
00:04:49.319 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:49.319               suites      1      1    n/a      0        0
00:04:49.319                tests      4      4      4      0        0
00:04:49.319              asserts    152    152    152      0      n/a
00:04:49.319 
00:04:49.319 Elapsed time =    0.130 seconds
00:04:49.319 
00:04:49.319 real	0m0.139s
00:04:49.319 user	0m0.129s
00:04:49.319 sys	0m0.009s
00:04:49.319 05:02:46 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:49.319 05:02:46 -- common/autotest_common.sh@10 -- # set +x
00:04:49.319 ************************************
00:04:49.319 END TEST env_memory
00:04:49.319 ************************************
00:04:49.580 05:02:46 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:49.580 05:02:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:49.580 05:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:49.580 05:02:46 -- common/autotest_common.sh@10 -- # set +x
00:04:49.580 ************************************
00:04:49.580 START TEST env_vtophys
00:04:49.580 ************************************
00:04:49.580 05:02:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:49.580 EAL: lib.eal log level changed from notice to debug
00:04:49.580 EAL: Detected lcore 0 as core 0 on socket 0
00:04:49.580 EAL: Detected lcore 1 as core 1 on socket 0
00:04:49.580 EAL: Detected lcore 2 as core 2 on socket 0
00:04:49.580 EAL: Detected lcore 3 as core 3 on socket 0
00:04:49.580 EAL: Detected lcore 4 as core 4 on socket 0
00:04:49.580 EAL: Detected lcore 5 as core 5 on socket 0
00:04:49.580 EAL: Detected lcore 6 as core 6 on socket 0
00:04:49.580 EAL: Detected lcore 7 as core 8 on socket 0
00:04:49.580 EAL: Detected lcore 8 as core 9 on socket 0
00:04:49.580 EAL: Detected lcore 9 as core 10 on socket 0
00:04:49.580 EAL: Detected lcore 10 as core 11 on socket 0
00:04:49.580 EAL: Detected lcore 11 as core 12 on socket 0
00:04:49.580 EAL: Detected lcore 12 as core 13 on socket 0
00:04:49.580 EAL: Detected lcore 13 as core 16 on socket 0
00:04:49.580 EAL: Detected lcore 14 as core 17 on socket 0
00:04:49.580 EAL: Detected lcore 15 as core 18 on socket 0
00:04:49.580 EAL: Detected lcore 16 as core 19 on socket 0
00:04:49.580 EAL: Detected lcore 17 as core 20 on socket 0
00:04:49.580 EAL: Detected lcore 18 as core 21 on socket 0
00:04:49.580 EAL: Detected lcore 19 as core 25 on socket 0
00:04:49.580 EAL: Detected lcore 20 as core 26 on socket 0
00:04:49.580 EAL: Detected lcore 21 as core 27 on socket 0
00:04:49.580 EAL: Detected lcore 22 as core 28 on socket 0
00:04:49.580 EAL: Detected lcore 23 as core 29 on socket 0
00:04:49.580 EAL: Detected lcore 24 as core 0 on socket 1
00:04:49.580 EAL: Detected lcore 25 as core 1 on socket 1
00:04:49.580 EAL: Detected lcore 26 as core 2 on socket 1
00:04:49.580 EAL: Detected lcore 27 as core 3 on socket 1
00:04:49.580 EAL: Detected lcore 28 as core 4 on socket 1
00:04:49.580 EAL: Detected lcore 29 as core 5 on socket 1
00:04:49.580 EAL: Detected lcore 30 as core 6 on socket 1
00:04:49.580 EAL: Detected lcore 31 as core 8 on socket 1
00:04:49.580 EAL: Detected lcore 32 as core 9 on socket 1
00:04:49.580 EAL: Detected lcore 33 as core 10 on socket 1
00:04:49.580 EAL: Detected lcore 34 as core 11 on socket 1
00:04:49.580 EAL: Detected lcore 35 as core 12 on socket 1
00:04:49.580 EAL: Detected lcore 36 as core 13 on socket 1
00:04:49.580 EAL: Detected lcore 37 as core 16 on socket 1
00:04:49.580 EAL: Detected lcore 38 as core 17 on socket 1
00:04:49.580 EAL: Detected lcore 39 as core 18 on socket 1
00:04:49.580 EAL: Detected lcore 40 as core 19 on socket 1
00:04:49.580 EAL: Detected lcore 41 as core 20 on socket 1
00:04:49.580 EAL: Detected lcore 42 as core 21 on socket 1
00:04:49.580 EAL: Detected lcore 43 as core 25 on socket 1
00:04:49.580 EAL: Detected lcore 44 as core 26 on socket 1
00:04:49.580 EAL: Detected lcore 45 as core 27 on socket 1
00:04:49.580 EAL: Detected lcore 46 as core 28 on socket 1
00:04:49.580 EAL: Detected lcore 47 as core 29 on socket 1
00:04:49.580 EAL: Detected lcore 48 as core 0 on socket 0
00:04:49.580 EAL: Detected lcore 49 as core 1 on socket 0
00:04:49.580 EAL: Detected lcore 50 as core 2 on socket 0
00:04:49.580 EAL: Detected lcore 51 as core 3 on socket 0
00:04:49.580 EAL: Detected lcore 52 as core 4 on socket 0
00:04:49.580 EAL: Detected lcore 53 as core 5 on socket 0
00:04:49.580 EAL: Detected lcore 54 as core 6 on socket 0
00:04:49.580 EAL: Detected lcore 55 as core 8 on socket 0
00:04:49.580 EAL: Detected lcore 56 as core 9 on socket 0
00:04:49.580 EAL: Detected lcore 57 as core 10 on socket 0
00:04:49.580 EAL: Detected lcore 58 as core 11 on socket 0
00:04:49.580 EAL: Detected lcore 59 as core 12 on socket 0
00:04:49.580 EAL: Detected lcore 60 as core 13 on socket 0
00:04:49.580 EAL: Detected lcore 61 as core 16 on socket 0
00:04:49.580 EAL: Detected lcore 62 as core 17 on socket 0
00:04:49.580 EAL: Detected lcore 63 as core 18 on socket 0
00:04:49.580 EAL: Detected lcore 64 as core 19 on socket 0
00:04:49.580 EAL: Detected lcore 65 as core 20 on socket 0
00:04:49.580 EAL: Detected lcore 66 as core 21 on socket 0
00:04:49.580 EAL: Detected lcore 67 as core 25 on socket 0
00:04:49.580 EAL: Detected lcore 68 as core 26 on socket 0
00:04:49.580 EAL: Detected lcore 69 as core 27 on socket 0
00:04:49.580 EAL: Detected lcore 70 as core 28 on socket 0
00:04:49.580 EAL: Detected lcore 71 as core 29 on socket 0
00:04:49.580 EAL: Detected lcore 72 as core 0 on socket 1
00:04:49.580 EAL: Detected lcore 73 as core 1 on socket 1
00:04:49.580 EAL: Detected lcore 74 as core 2 on socket 1
00:04:49.580 EAL: Detected lcore 75 as core 3 on socket 1
00:04:49.580 EAL: Detected lcore 76 as core 4 on socket 1
00:04:49.580 EAL: Detected lcore 77 as core 5 on socket 1
00:04:49.580 EAL: Detected lcore 78 as core 6 on socket 1
00:04:49.580 EAL: Detected lcore 79 as core 8 on socket 1
00:04:49.580 EAL: Detected lcore 80 as core 9 on socket 1
00:04:49.580 EAL: Detected lcore 81 as core 10 on socket 1
00:04:49.580 EAL: Detected lcore 82 as core 11 on socket 1
00:04:49.580 EAL: Detected lcore 83 as core 12 on socket 1
00:04:49.580 EAL: Detected lcore 84 as core 13 on socket 1
00:04:49.581 EAL: Detected lcore 85 as core 16 on socket 1
00:04:49.581 EAL: Detected lcore 86 as core 17 on socket 1
00:04:49.581 EAL: Detected lcore 87 as core 18 on socket 1
00:04:49.581 EAL: Detected lcore 88 as core 19 on socket 1
00:04:49.581 EAL: Detected lcore 89 as core 20 on socket 1
00:04:49.581 EAL: Detected lcore 90 as core 21 on socket 1
00:04:49.581 EAL: Detected lcore 91 as core 25 on socket 1
00:04:49.581 EAL: Detected lcore 92 as core 26 on socket 1
00:04:49.581 EAL: Detected lcore 93 as core 27 on socket 1
00:04:49.581 EAL: Detected lcore 94 as core 28 on socket 1
00:04:49.581 EAL: Detected lcore 95 as core 29 on socket 1
00:04:49.581 EAL: Maximum logical cores by configuration: 128
00:04:49.581 EAL: Detected CPU lcores: 96
00:04:49.581 EAL: Detected NUMA nodes: 2
00:04:49.581 EAL: Checking presence of .so 'librte_eal.so.24.0'
00:04:49.581 EAL: Detected shared linkage of DPDK
00:04:49.581 EAL: No shared files mode enabled, IPC will be disabled
00:04:49.581 EAL: Bus pci wants IOVA as 'DC'
00:04:49.581 EAL: Buses did not request a specific IOVA mode.
00:04:49.581 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:49.581 EAL: Selected IOVA mode 'VA'
00:04:49.581 EAL: No free 2048 kB hugepages reported on node 1
00:04:49.581 EAL: Probing VFIO support...
00:04:49.581 EAL: IOMMU type 1 (Type 1) is supported
00:04:49.581 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:49.581 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:49.581 EAL: VFIO support initialized
00:04:49.581 EAL: Ask a virtual area of 0x2e000 bytes
00:04:49.581 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:49.581 EAL: Setting up physically contiguous memory...
00:04:49.581 EAL: Setting maximum number of open files to 524288
00:04:49.581 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:49.581 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:49.581 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:49.581 EAL: Ask a virtual area of 0x61000 bytes
00:04:49.581 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:49.581 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:49.581 EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.581 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:49.581 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:49.581 EAL: Ask a virtual area of 0x61000 bytes
00:04:49.581 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:49.581 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:49.581 EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.581 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:49.581 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:49.581 EAL: Ask a virtual area of 0x61000 bytes
00:04:49.581 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:49.581 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:49.581 EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.581 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:49.581 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:49.581 EAL: Ask a virtual area of 0x61000 bytes
00:04:49.581 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:49.581 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:49.581 EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.581 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:49.581 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:49.581 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:49.581 EAL: Ask a virtual area of 0x61000 bytes
00:04:49.581 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:49.581 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:49.581 EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.581 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:49.581 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:49.581 EAL: Ask a virtual area of 0x61000 bytes
00:04:49.581 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:49.581 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:49.581 EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.581 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:49.581 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:49.581 EAL: Ask a virtual area of 0x61000 bytes
00:04:49.581 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:49.581 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:49.581 EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.581 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:49.581 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:49.581 EAL: Ask a virtual area of 0x61000 bytes
00:04:49.581 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:49.581 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:49.581 EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.581 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:49.581 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:49.581 EAL: Hugepages will be freed exactly as allocated.
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: TSC frequency is ~2100000 KHz
00:04:49.581 EAL: Main lcore 0 is ready (tid=7f017b1dda00;cpuset=[0])
00:04:49.581 EAL: Trying to obtain current memory policy.
00:04:49.581 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.581 EAL: Restoring previous memory policy: 0
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was expanded by 2MB
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:49.581 EAL: Mem event callback 'spdk:(nil)' registered
00:04:49.581 
00:04:49.581 
00:04:49.581 CUnit - A unit testing framework for C - Version 2.1-3
00:04:49.581 http://cunit.sourceforge.net/
00:04:49.581 
00:04:49.581 
00:04:49.581 Suite: components_suite
00:04:49.581 Test: vtophys_malloc_test ...passed
00:04:49.581 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:49.581 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.581 EAL: Restoring previous memory policy: 4
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was expanded by 4MB
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was shrunk by 4MB
00:04:49.581 EAL: Trying to obtain current memory policy.
00:04:49.581 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.581 EAL: Restoring previous memory policy: 4
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was expanded by 6MB
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was shrunk by 6MB
00:04:49.581 EAL: Trying to obtain current memory policy.
00:04:49.581 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.581 EAL: Restoring previous memory policy: 4
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was expanded by 10MB
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was shrunk by 10MB
00:04:49.581 EAL: Trying to obtain current memory policy.
00:04:49.581 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.581 EAL: Restoring previous memory policy: 4
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was expanded by 18MB
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was shrunk by 18MB
00:04:49.581 EAL: Trying to obtain current memory policy.
00:04:49.581 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.581 EAL: Restoring previous memory policy: 4
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was expanded by 34MB
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was shrunk by 34MB
00:04:49.581 EAL: Trying to obtain current memory policy.
00:04:49.581 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.581 EAL: Restoring previous memory policy: 4
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was expanded by 66MB
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was shrunk by 66MB
00:04:49.581 EAL: Trying to obtain current memory policy.
00:04:49.581 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.581 EAL: Restoring previous memory policy: 4
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.581 EAL: request: mp_malloc_sync
00:04:49.581 EAL: No shared files mode enabled, IPC is disabled
00:04:49.581 EAL: Heap on socket 0 was expanded by 130MB
00:04:49.581 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.582 EAL: request: mp_malloc_sync
00:04:49.582 EAL: No shared files mode enabled, IPC is disabled
00:04:49.582 EAL: Heap on socket 0 was shrunk by 130MB
00:04:49.582 EAL: Trying to obtain current memory policy.
00:04:49.582 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.582 EAL: Restoring previous memory policy: 4
00:04:49.582 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.582 EAL: request: mp_malloc_sync
00:04:49.582 EAL: No shared files mode enabled, IPC is disabled
00:04:49.582 EAL: Heap on socket 0 was expanded by 258MB
00:04:49.841 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.841 EAL: request: mp_malloc_sync
00:04:49.841 EAL: No shared files mode enabled, IPC is disabled
00:04:49.841 EAL: Heap on socket 0 was shrunk by 258MB
00:04:49.841 EAL: Trying to obtain current memory policy.
00:04:49.841 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.841 EAL: Restoring previous memory policy: 4
00:04:49.841 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.841 EAL: request: mp_malloc_sync
00:04:49.841 EAL: No shared files mode enabled, IPC is disabled
00:04:49.841 EAL: Heap on socket 0 was expanded by 514MB
00:04:49.841 EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.100 EAL: request: mp_malloc_sync
00:04:50.100 EAL: No shared files mode enabled, IPC is disabled
00:04:50.100 EAL: Heap on socket 0 was shrunk by 514MB
00:04:50.100 EAL: Trying to obtain current memory policy.
00:04:50.100 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:50.360 EAL: Restoring previous memory policy: 4
00:04:50.360 EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.360 EAL: request: mp_malloc_sync
00:04:50.360 EAL: No shared files mode enabled, IPC is disabled
00:04:50.360 EAL: Heap on socket 0 was expanded by 1026MB
00:04:50.360 EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.620 EAL: request: mp_malloc_sync
00:04:50.620 EAL: No shared files mode enabled, IPC is disabled
00:04:50.620 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:50.620 passed
00:04:50.620 
00:04:50.620 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:50.620               suites      1      1    n/a      0        0
00:04:50.620                tests      2      2      2      0        0
00:04:50.620              asserts    497    497    497      0      n/a
00:04:50.620 
00:04:50.620 Elapsed time =    0.968 seconds
00:04:50.620 EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.620 EAL: request: mp_malloc_sync
00:04:50.620 EAL: No shared files mode enabled, IPC is disabled
00:04:50.620 EAL: Heap on socket 0 was shrunk by 2MB
00:04:50.620 EAL: No shared files mode enabled, IPC is disabled
00:04:50.620 EAL: No shared files mode enabled, IPC is disabled
00:04:50.620 EAL: No shared files mode enabled, IPC is disabled
00:04:50.620 
00:04:50.620 real	0m1.094s
00:04:50.620 user	0m0.627s
00:04:50.620 sys	0m0.439s
00:04:50.620 05:02:47 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:50.620 05:02:47 -- common/autotest_common.sh@10 -- # set +x
00:04:50.620 ************************************
00:04:50.620 END TEST env_vtophys
00:04:50.620 ************************************
00:04:50.620 05:02:47 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut
00:04:50.620 05:02:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:50.620 05:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:50.620 05:02:47 -- common/autotest_common.sh@10 -- # set +x
00:04:50.620 ************************************
00:04:50.620 START TEST env_pci
00:04:50.620 ************************************
00:04:50.620 05:02:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut
00:04:50.620 
00:04:50.620 
00:04:50.620 CUnit - A unit testing framework for C - Version 2.1-3
00:04:50.620 http://cunit.sourceforge.net/
00:04:50.620 
00:04:50.620 
00:04:50.620 Suite: pci
00:04:50.620 Test: pci_hook ...[2024-11-20 05:02:47.302129] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 100122 has claimed it
00:04:50.620 EAL: Cannot find device (10000:00:01.0)
00:04:50.620 EAL: Failed to attach device on primary process
00:04:50.620 passed
00:04:50.620 
00:04:50.620 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:50.620               suites      1      1    n/a      0        0
00:04:50.620                tests      1      1      1      0        0
00:04:50.620              asserts     25     25     25      0      n/a
00:04:50.620 
00:04:50.620 Elapsed time =    0.025 seconds
00:04:50.620 
00:04:50.620 real	0m0.044s
00:04:50.620 user	0m0.013s
00:04:50.620 sys	0m0.031s
00:04:50.620 05:02:47 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:50.620 05:02:47 -- common/autotest_common.sh@10 -- # set +x
00:04:50.620 ************************************
00:04:50.620 END TEST env_pci
00:04:50.620 ************************************
00:04:50.620 05:02:47 -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:50.620 05:02:47 -- env/env.sh@15 -- # uname
00:04:50.620 05:02:47 -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:50.620 05:02:47 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:50.620 05:02:47 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:50.620 05:02:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:04:50.620 05:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:50.620 05:02:47 -- common/autotest_common.sh@10 -- # set +x
00:04:50.620 ************************************
00:04:50.620 START TEST env_dpdk_post_init
00:04:50.620 ************************************
00:04:50.620 05:02:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:50.620 EAL: Detected CPU lcores: 96
00:04:50.620 EAL: Detected NUMA nodes: 2
00:04:50.620 EAL: Detected shared linkage of DPDK
00:04:50.620 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:50.620 EAL: Selected IOVA mode 'VA'
00:04:50.620 EAL: No free 2048 kB hugepages reported on node 1
00:04:50.620 EAL: VFIO support initialized
00:04:50.620 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:50.880 EAL: Using IOMMU type 1 (Type 1)
00:04:50.880 EAL: Ignore mapping IO port bar(1)
00:04:50.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:50.880 EAL: Ignore mapping IO port bar(1)
00:04:50.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:50.880 EAL: Ignore mapping IO port bar(1)
00:04:50.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:50.880 EAL: Ignore mapping IO port bar(1)
00:04:50.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:50.880 EAL: Ignore mapping IO port bar(1)
00:04:50.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:50.880 EAL: Ignore mapping IO port bar(1)
00:04:50.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:50.880 EAL: Ignore mapping IO port bar(1)
00:04:50.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:50.880 EAL: Ignore mapping IO port bar(1)
00:04:50.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:51.820 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:51.820 EAL: Ignore mapping IO port bar(1)
00:04:51.820 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:51.820 EAL: Ignore mapping IO port bar(1)
00:04:51.820 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:51.820 EAL: Ignore mapping IO port bar(1)
00:04:51.820 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:51.820 EAL: Ignore mapping IO port bar(1)
00:04:51.820 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:51.820 EAL: Ignore mapping IO port bar(1)
00:04:51.820 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:51.820 EAL: Ignore mapping IO port bar(1)
00:04:51.820 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:51.820 EAL: Ignore mapping IO port bar(1)
00:04:51.820 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:51.820 EAL: Ignore mapping IO port bar(1)
00:04:51.820 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:55.114 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:55.114 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:55.114 Starting DPDK initialization...
00:04:55.114 Starting SPDK post initialization...
00:04:55.114 SPDK NVMe probe
00:04:55.114 Attaching to 0000:5e:00.0
00:04:55.114 Attached to 0000:5e:00.0
00:04:55.114 Cleaning up...
00:04:55.114 
00:04:55.114 real	0m4.350s
00:04:55.114 user	0m3.252s
00:04:55.114 sys	0m0.172s
00:04:55.114 05:02:51 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:55.114 05:02:51 -- common/autotest_common.sh@10 -- # set +x
00:04:55.114 ************************************
00:04:55.114 END TEST env_dpdk_post_init
00:04:55.114 ************************************
00:04:55.114 05:02:51 -- env/env.sh@26 -- # uname
00:04:55.114 05:02:51 -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:55.114 05:02:51 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:55.114 05:02:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:55.114 05:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:55.114 05:02:51 -- common/autotest_common.sh@10 -- # set +x
00:04:55.114 ************************************
00:04:55.114 START TEST env_mem_callbacks
00:04:55.114 ************************************
00:04:55.114 05:02:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:55.114 EAL: Detected CPU lcores: 96
00:04:55.114 EAL: Detected NUMA nodes: 2
00:04:55.114 EAL: Detected shared linkage of DPDK
00:04:55.114 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:55.114 EAL: Selected IOVA mode 'VA'
00:04:55.114 EAL: No free 2048 kB hugepages reported on node 1
00:04:55.114 EAL: VFIO support initialized
00:04:55.114 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:55.114 
00:04:55.114 
00:04:55.114 CUnit - A unit testing framework for C - Version 2.1-3
00:04:55.114 http://cunit.sourceforge.net/
00:04:55.114 
00:04:55.114 
00:04:55.114 Suite: memory
00:04:55.114 Test: test ...
00:04:55.114 register 0x200000200000 2097152
00:04:55.114 malloc 3145728
00:04:55.114 register 0x200000400000 4194304
00:04:55.114 buf 0x200000500000 len 3145728 PASSED
00:04:55.114 malloc 64
00:04:55.114 buf 0x2000004fff40 len 64 PASSED
00:04:55.114 malloc 4194304
00:04:55.114 register 0x200000800000 6291456
00:04:55.114 buf 0x200000a00000 len 4194304 PASSED
00:04:55.114 free 0x200000500000 3145728
00:04:55.114 free 0x2000004fff40 64
00:04:55.114 unregister 0x200000400000 4194304 PASSED
00:04:55.114 free 0x200000a00000 4194304
00:04:55.114 unregister 0x200000800000 6291456 PASSED
00:04:55.114 malloc 8388608
00:04:55.114 register 0x200000400000 10485760
00:04:55.114 buf 0x200000600000 len 8388608 PASSED
00:04:55.114 free 0x200000600000 8388608
00:04:55.114 unregister 0x200000400000 10485760 PASSED
00:04:55.114 passed
00:04:55.114 
00:04:55.114 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:55.114               suites      1      1    n/a      0        0
00:04:55.114                tests      1      1      1      0        0
00:04:55.114              asserts     15     15     15      0      n/a
00:04:55.114 
00:04:55.114 Elapsed time =    0.007 seconds
00:04:55.114 
00:04:55.114 real	0m0.056s
00:04:55.114 user	0m0.021s
00:04:55.114 sys	0m0.035s
00:04:55.114 05:02:51 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:55.114 05:02:51 -- common/autotest_common.sh@10 -- # set +x
00:04:55.114 ************************************
00:04:55.114 END TEST env_mem_callbacks
00:04:55.114 ************************************
00:04:55.114 
00:04:55.114 real	0m6.058s
00:04:55.114 user	0m4.224s
00:04:55.114 sys	0m0.925s
00:04:55.114 05:02:51 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:55.114 05:02:51 -- common/autotest_common.sh@10 -- # set +x
00:04:55.114 ************************************
00:04:55.114 END TEST env
00:04:55.114 ************************************
00:04:55.114 05:02:51 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh
00:04:55.114 05:02:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:55.114 05:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:55.114 05:02:51 -- common/autotest_common.sh@10 -- # set +x
00:04:55.114 ************************************
00:04:55.114 START TEST rpc
00:04:55.114 ************************************
00:04:55.114 05:02:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh
00:04:55.374 * Looking for test storage...
00:04:55.374 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc
00:04:55.374 05:02:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:55.374 05:02:51 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:55.374 05:02:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:55.374 05:02:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:55.374 05:02:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:55.374 05:02:52 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:55.374 05:02:52 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:55.374 05:02:52 -- scripts/common.sh@335 -- # IFS=.-:
00:04:55.374 05:02:52 -- scripts/common.sh@335 -- # read -ra ver1
00:04:55.374 05:02:52 -- scripts/common.sh@336 -- # IFS=.-:
00:04:55.374 05:02:52 -- scripts/common.sh@336 -- # read -ra ver2
00:04:55.374 05:02:52 -- scripts/common.sh@337 -- # local 'op=<'
00:04:55.374 05:02:52 -- scripts/common.sh@339 -- # ver1_l=2
00:04:55.374 05:02:52 -- scripts/common.sh@340 -- # ver2_l=1
00:04:55.374 05:02:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:55.374 05:02:52 -- scripts/common.sh@343 -- # case "$op" in
00:04:55.374 05:02:52 -- scripts/common.sh@344 -- # : 1
00:04:55.374 05:02:52 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:55.374 05:02:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:55.374 05:02:52 -- scripts/common.sh@364 -- # decimal 1
00:04:55.374 05:02:52 -- scripts/common.sh@352 -- # local d=1
00:04:55.374 05:02:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:55.374 05:02:52 -- scripts/common.sh@354 -- # echo 1
00:04:55.374 05:02:52 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:55.374 05:02:52 -- scripts/common.sh@365 -- # decimal 2
00:04:55.374 05:02:52 -- scripts/common.sh@352 -- # local d=2
00:04:55.374 05:02:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:55.374 05:02:52 -- scripts/common.sh@354 -- # echo 2
00:04:55.374 05:02:52 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:55.374 05:02:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:55.374 05:02:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:55.374 05:02:52 -- scripts/common.sh@367 -- # return 0
00:04:55.375 05:02:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:55.375 05:02:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:55.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.375 --rc genhtml_branch_coverage=1
00:04:55.375 --rc genhtml_function_coverage=1
00:04:55.375 --rc genhtml_legend=1
00:04:55.375 --rc geninfo_all_blocks=1
00:04:55.375 --rc geninfo_unexecuted_blocks=1
00:04:55.375 
00:04:55.375 '
00:04:55.375 05:02:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:55.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.375 --rc genhtml_branch_coverage=1
00:04:55.375 --rc genhtml_function_coverage=1
00:04:55.375 --rc genhtml_legend=1
00:04:55.375 --rc geninfo_all_blocks=1
00:04:55.375 --rc geninfo_unexecuted_blocks=1
00:04:55.375 
00:04:55.375 '
00:04:55.375 05:02:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:04:55.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.375 --rc genhtml_branch_coverage=1
00:04:55.375 --rc genhtml_function_coverage=1
00:04:55.375 --rc genhtml_legend=1
00:04:55.375 --rc geninfo_all_blocks=1
00:04:55.375 --rc geninfo_unexecuted_blocks=1
00:04:55.375 
00:04:55.375 '
00:04:55.375 05:02:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:55.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.375 --rc genhtml_branch_coverage=1
00:04:55.375 --rc genhtml_function_coverage=1
00:04:55.375 --rc genhtml_legend=1
00:04:55.375 --rc geninfo_all_blocks=1
00:04:55.375 --rc geninfo_unexecuted_blocks=1
00:04:55.375 
00:04:55.375 '
00:04:55.375 05:02:52 -- rpc/rpc.sh@65 -- # spdk_pid=100949
00:04:55.375 05:02:52 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:55.375 05:02:52 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:55.375 05:02:52 -- rpc/rpc.sh@67 -- # waitforlisten 100949
00:04:55.375 05:02:52 -- common/autotest_common.sh@829 -- # '[' -z 100949 ']'
00:04:55.375 05:02:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:55.375 05:02:52 -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:55.375 05:02:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:55.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:55.375 05:02:52 -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:55.375 05:02:52 -- common/autotest_common.sh@10 -- # set +x
00:04:55.375 [2024-11-20 05:02:52.113200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:55.375 [2024-11-20 05:02:52.113246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100949 ] 00:04:55.375 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.375 [2024-11-20 05:02:52.181282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.635 [2024-11-20 05:02:52.256939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.635 [2024-11-20 05:02:52.257042] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:55.635 [2024-11-20 05:02:52.257056] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 100949' to capture a snapshot of events at runtime. 00:04:55.635 [2024-11-20 05:02:52.257063] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid100949 for offline analysis/debug. 
00:04:55.635 [2024-11-20 05:02:52.257079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.204 05:02:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.204 05:02:52 -- common/autotest_common.sh@862 -- # return 0 00:04:56.204 05:02:52 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:56.204 05:02:52 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:56.204 05:02:52 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:56.204 05:02:52 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:56.204 05:02:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.204 05:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.204 05:02:52 -- common/autotest_common.sh@10 -- # set +x 00:04:56.204 ************************************ 00:04:56.204 START TEST rpc_integrity 00:04:56.204 ************************************ 00:04:56.204 05:02:52 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:56.204 05:02:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.204 05:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.204 05:02:52 -- common/autotest_common.sh@10 -- # set +x 00:04:56.204 05:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.204 05:02:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.204 05:02:52 -- rpc/rpc.sh@13 -- # jq length 00:04:56.204 05:02:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:56.204 05:02:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.204 05:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.204 05:02:52 -- common/autotest_common.sh@10 -- # set +x 00:04:56.204 05:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.205 05:02:52 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:56.205 05:02:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.205 05:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.205 05:02:52 -- common/autotest_common.sh@10 -- # set +x 00:04:56.205 05:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.205 05:02:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.205 { 00:04:56.205 "name": "Malloc0", 00:04:56.205 "aliases": [ 00:04:56.205 "ba2160aa-b3f0-44c1-9889-dc2231320d33" 00:04:56.205 ], 00:04:56.205 "product_name": "Malloc disk", 00:04:56.205 "block_size": 512, 00:04:56.205 "num_blocks": 16384, 00:04:56.205 "uuid": "ba2160aa-b3f0-44c1-9889-dc2231320d33", 00:04:56.205 "assigned_rate_limits": { 00:04:56.205 "rw_ios_per_sec": 0, 00:04:56.205 "rw_mbytes_per_sec": 0, 00:04:56.205 "r_mbytes_per_sec": 0, 00:04:56.205 "w_mbytes_per_sec": 0 00:04:56.205 }, 00:04:56.205 "claimed": false, 00:04:56.205 "zoned": false, 00:04:56.205 "supported_io_types": { 00:04:56.205 "read": true, 00:04:56.205 "write": true, 00:04:56.205 "unmap": true, 00:04:56.205 "write_zeroes": true, 00:04:56.205 "flush": true, 00:04:56.205 "reset": true, 00:04:56.205 "compare": false, 00:04:56.205 "compare_and_write": false, 00:04:56.205 "abort": true, 00:04:56.205 "nvme_admin": false, 00:04:56.205 "nvme_io": false 00:04:56.205 }, 00:04:56.205 "memory_domains": [ 00:04:56.205 { 00:04:56.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.205 "dma_device_type": 2 00:04:56.205 } 00:04:56.205 ], 00:04:56.205 "driver_specific": {} 00:04:56.205 } 00:04:56.205 ]' 00:04:56.205 05:02:53 -- rpc/rpc.sh@17 -- # jq length 00:04:56.464 05:02:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 
00:04:56.465 05:02:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:56.465 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.465 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.465 [2024-11-20 05:02:53.045396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:56.465 [2024-11-20 05:02:53.045427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.465 [2024-11-20 05:02:53.045438] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6eda90 00:04:56.465 [2024-11-20 05:02:53.045444] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.465 [2024-11-20 05:02:53.046537] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.465 [2024-11-20 05:02:53.046556] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.465 Passthru0 00:04:56.465 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.465 05:02:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.465 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.465 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.465 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.465 05:02:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.465 { 00:04:56.465 "name": "Malloc0", 00:04:56.465 "aliases": [ 00:04:56.465 "ba2160aa-b3f0-44c1-9889-dc2231320d33" 00:04:56.465 ], 00:04:56.465 "product_name": "Malloc disk", 00:04:56.465 "block_size": 512, 00:04:56.465 "num_blocks": 16384, 00:04:56.465 "uuid": "ba2160aa-b3f0-44c1-9889-dc2231320d33", 00:04:56.465 "assigned_rate_limits": { 00:04:56.465 "rw_ios_per_sec": 0, 00:04:56.465 "rw_mbytes_per_sec": 0, 00:04:56.465 "r_mbytes_per_sec": 0, 00:04:56.465 "w_mbytes_per_sec": 0 00:04:56.465 }, 00:04:56.465 "claimed": true, 00:04:56.465 "claim_type": "exclusive_write", 00:04:56.465 "zoned": 
false, 00:04:56.465 "supported_io_types": { 00:04:56.465 "read": true, 00:04:56.465 "write": true, 00:04:56.465 "unmap": true, 00:04:56.465 "write_zeroes": true, 00:04:56.465 "flush": true, 00:04:56.465 "reset": true, 00:04:56.465 "compare": false, 00:04:56.465 "compare_and_write": false, 00:04:56.465 "abort": true, 00:04:56.465 "nvme_admin": false, 00:04:56.465 "nvme_io": false 00:04:56.465 }, 00:04:56.465 "memory_domains": [ 00:04:56.465 { 00:04:56.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.465 "dma_device_type": 2 00:04:56.465 } 00:04:56.465 ], 00:04:56.465 "driver_specific": {} 00:04:56.465 }, 00:04:56.465 { 00:04:56.465 "name": "Passthru0", 00:04:56.465 "aliases": [ 00:04:56.465 "cffba462-f582-56c2-b7fe-1812e8ca9f0d" 00:04:56.465 ], 00:04:56.465 "product_name": "passthru", 00:04:56.465 "block_size": 512, 00:04:56.465 "num_blocks": 16384, 00:04:56.465 "uuid": "cffba462-f582-56c2-b7fe-1812e8ca9f0d", 00:04:56.465 "assigned_rate_limits": { 00:04:56.465 "rw_ios_per_sec": 0, 00:04:56.465 "rw_mbytes_per_sec": 0, 00:04:56.465 "r_mbytes_per_sec": 0, 00:04:56.465 "w_mbytes_per_sec": 0 00:04:56.465 }, 00:04:56.465 "claimed": false, 00:04:56.465 "zoned": false, 00:04:56.465 "supported_io_types": { 00:04:56.465 "read": true, 00:04:56.465 "write": true, 00:04:56.465 "unmap": true, 00:04:56.465 "write_zeroes": true, 00:04:56.465 "flush": true, 00:04:56.465 "reset": true, 00:04:56.465 "compare": false, 00:04:56.465 "compare_and_write": false, 00:04:56.465 "abort": true, 00:04:56.465 "nvme_admin": false, 00:04:56.465 "nvme_io": false 00:04:56.465 }, 00:04:56.465 "memory_domains": [ 00:04:56.465 { 00:04:56.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.465 "dma_device_type": 2 00:04:56.465 } 00:04:56.465 ], 00:04:56.465 "driver_specific": { 00:04:56.465 "passthru": { 00:04:56.465 "name": "Passthru0", 00:04:56.465 "base_bdev_name": "Malloc0" 00:04:56.465 } 00:04:56.465 } 00:04:56.465 } 00:04:56.465 ]' 00:04:56.465 05:02:53 -- rpc/rpc.sh@21 -- # jq length 
00:04:56.465 05:02:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.465 05:02:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.465 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.465 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.465 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.465 05:02:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:56.465 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.465 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.465 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.465 05:02:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.465 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.465 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.465 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.465 05:02:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.465 05:02:53 -- rpc/rpc.sh@26 -- # jq length 00:04:56.465 05:02:53 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.465 00:04:56.465 real 0m0.265s 00:04:56.465 user 0m0.161s 00:04:56.465 sys 0m0.037s 00:04:56.465 05:02:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.465 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.465 ************************************ 00:04:56.465 END TEST rpc_integrity 00:04:56.465 ************************************ 00:04:56.465 05:02:53 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:56.465 05:02:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.465 05:02:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.465 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.465 ************************************ 00:04:56.465 START TEST rpc_plugins 00:04:56.465 ************************************ 00:04:56.465 05:02:53 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:56.465 05:02:53 -- 
rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:56.465 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.465 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.465 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.465 05:02:53 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:56.465 05:02:53 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:56.465 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.465 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.465 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.465 05:02:53 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:56.465 { 00:04:56.465 "name": "Malloc1", 00:04:56.465 "aliases": [ 00:04:56.465 "1267a62b-0973-4c1b-a6df-c26746ac09db" 00:04:56.465 ], 00:04:56.465 "product_name": "Malloc disk", 00:04:56.465 "block_size": 4096, 00:04:56.465 "num_blocks": 256, 00:04:56.465 "uuid": "1267a62b-0973-4c1b-a6df-c26746ac09db", 00:04:56.465 "assigned_rate_limits": { 00:04:56.465 "rw_ios_per_sec": 0, 00:04:56.465 "rw_mbytes_per_sec": 0, 00:04:56.465 "r_mbytes_per_sec": 0, 00:04:56.465 "w_mbytes_per_sec": 0 00:04:56.465 }, 00:04:56.465 "claimed": false, 00:04:56.465 "zoned": false, 00:04:56.465 "supported_io_types": { 00:04:56.465 "read": true, 00:04:56.465 "write": true, 00:04:56.465 "unmap": true, 00:04:56.465 "write_zeroes": true, 00:04:56.465 "flush": true, 00:04:56.465 "reset": true, 00:04:56.465 "compare": false, 00:04:56.465 "compare_and_write": false, 00:04:56.465 "abort": true, 00:04:56.465 "nvme_admin": false, 00:04:56.465 "nvme_io": false 00:04:56.465 }, 00:04:56.465 "memory_domains": [ 00:04:56.465 { 00:04:56.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.465 "dma_device_type": 2 00:04:56.465 } 00:04:56.465 ], 00:04:56.465 "driver_specific": {} 00:04:56.465 } 00:04:56.465 ]' 00:04:56.465 05:02:53 -- rpc/rpc.sh@32 -- # jq length 00:04:56.725 05:02:53 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:56.725 05:02:53 -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:56.725 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.725 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.725 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.725 05:02:53 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:56.725 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.725 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.725 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.725 05:02:53 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:56.725 05:02:53 -- rpc/rpc.sh@36 -- # jq length 00:04:56.725 05:02:53 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:56.725 00:04:56.725 real 0m0.139s 00:04:56.725 user 0m0.082s 00:04:56.725 sys 0m0.019s 00:04:56.725 05:02:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.725 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.725 ************************************ 00:04:56.725 END TEST rpc_plugins 00:04:56.725 ************************************ 00:04:56.725 05:02:53 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:56.725 05:02:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.725 05:02:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.725 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.725 ************************************ 00:04:56.725 START TEST rpc_trace_cmd_test 00:04:56.725 ************************************ 00:04:56.725 05:02:53 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:56.725 05:02:53 -- rpc/rpc.sh@40 -- # local info 00:04:56.725 05:02:53 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:56.725 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.725 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.725 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.725 05:02:53 -- 
rpc/rpc.sh@42 -- # info='{ 00:04:56.725 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid100949", 00:04:56.725 "tpoint_group_mask": "0x8", 00:04:56.725 "iscsi_conn": { 00:04:56.725 "mask": "0x2", 00:04:56.725 "tpoint_mask": "0x0" 00:04:56.725 }, 00:04:56.725 "scsi": { 00:04:56.725 "mask": "0x4", 00:04:56.725 "tpoint_mask": "0x0" 00:04:56.725 }, 00:04:56.725 "bdev": { 00:04:56.725 "mask": "0x8", 00:04:56.725 "tpoint_mask": "0xffffffffffffffff" 00:04:56.725 }, 00:04:56.725 "nvmf_rdma": { 00:04:56.725 "mask": "0x10", 00:04:56.725 "tpoint_mask": "0x0" 00:04:56.725 }, 00:04:56.725 "nvmf_tcp": { 00:04:56.725 "mask": "0x20", 00:04:56.725 "tpoint_mask": "0x0" 00:04:56.725 }, 00:04:56.725 "ftl": { 00:04:56.725 "mask": "0x40", 00:04:56.725 "tpoint_mask": "0x0" 00:04:56.725 }, 00:04:56.725 "blobfs": { 00:04:56.725 "mask": "0x80", 00:04:56.725 "tpoint_mask": "0x0" 00:04:56.725 }, 00:04:56.725 "dsa": { 00:04:56.725 "mask": "0x200", 00:04:56.725 "tpoint_mask": "0x0" 00:04:56.726 }, 00:04:56.726 "thread": { 00:04:56.726 "mask": "0x400", 00:04:56.726 "tpoint_mask": "0x0" 00:04:56.726 }, 00:04:56.726 "nvme_pcie": { 00:04:56.726 "mask": "0x800", 00:04:56.726 "tpoint_mask": "0x0" 00:04:56.726 }, 00:04:56.726 "iaa": { 00:04:56.726 "mask": "0x1000", 00:04:56.726 "tpoint_mask": "0x0" 00:04:56.726 }, 00:04:56.726 "nvme_tcp": { 00:04:56.726 "mask": "0x2000", 00:04:56.726 "tpoint_mask": "0x0" 00:04:56.726 }, 00:04:56.726 "bdev_nvme": { 00:04:56.726 "mask": "0x4000", 00:04:56.726 "tpoint_mask": "0x0" 00:04:56.726 } 00:04:56.726 }' 00:04:56.726 05:02:53 -- rpc/rpc.sh@43 -- # jq length 00:04:56.726 05:02:53 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:56.726 05:02:53 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:56.726 05:02:53 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:56.726 05:02:53 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:56.985 05:02:53 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:56.985 05:02:53 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:56.985 
05:02:53 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:56.985 05:02:53 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:56.985 05:02:53 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:56.985 00:04:56.985 real 0m0.223s 00:04:56.985 user 0m0.183s 00:04:56.985 sys 0m0.030s 00:04:56.985 05:02:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.985 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.985 ************************************ 00:04:56.985 END TEST rpc_trace_cmd_test 00:04:56.985 ************************************ 00:04:56.985 05:02:53 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:56.985 05:02:53 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:56.985 05:02:53 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:56.985 05:02:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.985 05:02:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.985 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.985 ************************************ 00:04:56.985 START TEST rpc_daemon_integrity 00:04:56.985 ************************************ 00:04:56.985 05:02:53 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:56.985 05:02:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.985 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.985 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.985 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.985 05:02:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.986 05:02:53 -- rpc/rpc.sh@13 -- # jq length 00:04:56.986 05:02:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.986 05:02:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.986 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.986 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.986 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.986 05:02:53 -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:56.986 05:02:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.986 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.986 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.986 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.986 05:02:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.986 { 00:04:56.986 "name": "Malloc2", 00:04:56.986 "aliases": [ 00:04:56.986 "527f120f-f43f-426b-b82c-7e3d369aed72" 00:04:56.986 ], 00:04:56.986 "product_name": "Malloc disk", 00:04:56.986 "block_size": 512, 00:04:56.986 "num_blocks": 16384, 00:04:56.986 "uuid": "527f120f-f43f-426b-b82c-7e3d369aed72", 00:04:56.986 "assigned_rate_limits": { 00:04:56.986 "rw_ios_per_sec": 0, 00:04:56.986 "rw_mbytes_per_sec": 0, 00:04:56.986 "r_mbytes_per_sec": 0, 00:04:56.986 "w_mbytes_per_sec": 0 00:04:56.986 }, 00:04:56.986 "claimed": false, 00:04:56.986 "zoned": false, 00:04:56.986 "supported_io_types": { 00:04:56.986 "read": true, 00:04:56.986 "write": true, 00:04:56.986 "unmap": true, 00:04:56.986 "write_zeroes": true, 00:04:56.986 "flush": true, 00:04:56.986 "reset": true, 00:04:56.986 "compare": false, 00:04:56.986 "compare_and_write": false, 00:04:56.986 "abort": true, 00:04:56.986 "nvme_admin": false, 00:04:56.986 "nvme_io": false 00:04:56.986 }, 00:04:56.986 "memory_domains": [ 00:04:56.986 { 00:04:56.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.986 "dma_device_type": 2 00:04:56.986 } 00:04:56.986 ], 00:04:56.986 "driver_specific": {} 00:04:56.986 } 00:04:56.986 ]' 00:04:56.986 05:02:53 -- rpc/rpc.sh@17 -- # jq length 00:04:56.986 05:02:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.986 05:02:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:56.986 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.986 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.986 [2024-11-20 05:02:53.803461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on Malloc2 00:04:56.986 [2024-11-20 05:02:53.803487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.986 [2024-11-20 05:02:53.803502] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x88d770 00:04:56.986 [2024-11-20 05:02:53.803508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.986 [2024-11-20 05:02:53.804459] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.986 [2024-11-20 05:02:53.804478] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.986 Passthru0 00:04:56.986 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.986 05:02:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.986 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.986 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:57.246 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.246 05:02:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.246 { 00:04:57.246 "name": "Malloc2", 00:04:57.246 "aliases": [ 00:04:57.246 "527f120f-f43f-426b-b82c-7e3d369aed72" 00:04:57.246 ], 00:04:57.246 "product_name": "Malloc disk", 00:04:57.246 "block_size": 512, 00:04:57.246 "num_blocks": 16384, 00:04:57.246 "uuid": "527f120f-f43f-426b-b82c-7e3d369aed72", 00:04:57.246 "assigned_rate_limits": { 00:04:57.246 "rw_ios_per_sec": 0, 00:04:57.246 "rw_mbytes_per_sec": 0, 00:04:57.246 "r_mbytes_per_sec": 0, 00:04:57.246 "w_mbytes_per_sec": 0 00:04:57.246 }, 00:04:57.246 "claimed": true, 00:04:57.246 "claim_type": "exclusive_write", 00:04:57.246 "zoned": false, 00:04:57.246 "supported_io_types": { 00:04:57.246 "read": true, 00:04:57.246 "write": true, 00:04:57.246 "unmap": true, 00:04:57.246 "write_zeroes": true, 00:04:57.246 "flush": true, 00:04:57.246 "reset": true, 00:04:57.246 "compare": false, 00:04:57.246 "compare_and_write": false, 00:04:57.246 "abort": true, 00:04:57.246 
"nvme_admin": false, 00:04:57.246 "nvme_io": false 00:04:57.246 }, 00:04:57.246 "memory_domains": [ 00:04:57.246 { 00:04:57.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.246 "dma_device_type": 2 00:04:57.246 } 00:04:57.246 ], 00:04:57.246 "driver_specific": {} 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "name": "Passthru0", 00:04:57.246 "aliases": [ 00:04:57.246 "63e4b00e-7637-56fc-b69f-5be5c7668c0f" 00:04:57.246 ], 00:04:57.246 "product_name": "passthru", 00:04:57.246 "block_size": 512, 00:04:57.246 "num_blocks": 16384, 00:04:57.246 "uuid": "63e4b00e-7637-56fc-b69f-5be5c7668c0f", 00:04:57.246 "assigned_rate_limits": { 00:04:57.246 "rw_ios_per_sec": 0, 00:04:57.246 "rw_mbytes_per_sec": 0, 00:04:57.246 "r_mbytes_per_sec": 0, 00:04:57.246 "w_mbytes_per_sec": 0 00:04:57.246 }, 00:04:57.246 "claimed": false, 00:04:57.246 "zoned": false, 00:04:57.246 "supported_io_types": { 00:04:57.246 "read": true, 00:04:57.246 "write": true, 00:04:57.246 "unmap": true, 00:04:57.246 "write_zeroes": true, 00:04:57.246 "flush": true, 00:04:57.246 "reset": true, 00:04:57.246 "compare": false, 00:04:57.246 "compare_and_write": false, 00:04:57.246 "abort": true, 00:04:57.246 "nvme_admin": false, 00:04:57.246 "nvme_io": false 00:04:57.246 }, 00:04:57.246 "memory_domains": [ 00:04:57.246 { 00:04:57.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.246 "dma_device_type": 2 00:04:57.246 } 00:04:57.246 ], 00:04:57.246 "driver_specific": { 00:04:57.246 "passthru": { 00:04:57.246 "name": "Passthru0", 00:04:57.246 "base_bdev_name": "Malloc2" 00:04:57.246 } 00:04:57.246 } 00:04:57.246 } 00:04:57.246 ]' 00:04:57.246 05:02:53 -- rpc/rpc.sh@21 -- # jq length 00:04:57.246 05:02:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.246 05:02:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.246 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.246 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:57.246 05:02:53 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.246 05:02:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:57.246 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.246 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:57.246 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.246 05:02:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.246 05:02:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.246 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:57.246 05:02:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.246 05:02:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.246 05:02:53 -- rpc/rpc.sh@26 -- # jq length 00:04:57.246 05:02:53 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.246 00:04:57.246 real 0m0.270s 00:04:57.246 user 0m0.166s 00:04:57.246 sys 0m0.036s 00:04:57.246 05:02:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.246 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:57.246 ************************************ 00:04:57.246 END TEST rpc_daemon_integrity 00:04:57.246 ************************************ 00:04:57.246 05:02:53 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:57.246 05:02:53 -- rpc/rpc.sh@84 -- # killprocess 100949 00:04:57.246 05:02:53 -- common/autotest_common.sh@936 -- # '[' -z 100949 ']' 00:04:57.246 05:02:53 -- common/autotest_common.sh@940 -- # kill -0 100949 00:04:57.246 05:02:53 -- common/autotest_common.sh@941 -- # uname 00:04:57.246 05:02:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.246 05:02:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100949 00:04:57.246 05:02:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.246 05:02:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.246 05:02:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100949' 00:04:57.246 killing process with pid 
100949 00:04:57.246 05:02:54 -- common/autotest_common.sh@955 -- # kill 100949 00:04:57.246 05:02:54 -- common/autotest_common.sh@960 -- # wait 100949 00:04:57.816 00:04:57.816 real 0m2.459s 00:04:57.816 user 0m3.115s 00:04:57.816 sys 0m0.630s 00:04:57.816 05:02:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.816 05:02:54 -- common/autotest_common.sh@10 -- # set +x 00:04:57.816 ************************************ 00:04:57.816 END TEST rpc 00:04:57.816 ************************************ 00:04:57.816 05:02:54 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.816 05:02:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.816 05:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.816 05:02:54 -- common/autotest_common.sh@10 -- # set +x 00:04:57.816 ************************************ 00:04:57.816 START TEST rpc_client 00:04:57.816 ************************************ 00:04:57.816 05:02:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.816 * Looking for test storage... 
00:04:57.816 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client
00:04:57.816 05:02:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:57.816 05:02:54 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:57.816 05:02:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:57.816 05:02:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:57.816 05:02:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:57.816 05:02:54 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:57.816 05:02:54 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:57.816 05:02:54 -- scripts/common.sh@335 -- # IFS=.-:
00:04:57.816 05:02:54 -- scripts/common.sh@335 -- # read -ra ver1
00:04:57.816 05:02:54 -- scripts/common.sh@336 -- # IFS=.-:
00:04:57.816 05:02:54 -- scripts/common.sh@336 -- # read -ra ver2
00:04:57.816 05:02:54 -- scripts/common.sh@337 -- # local 'op=<'
00:04:57.816 05:02:54 -- scripts/common.sh@339 -- # ver1_l=2
00:04:57.816 05:02:54 -- scripts/common.sh@340 -- # ver2_l=1
00:04:57.816 05:02:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:57.816 05:02:54 -- scripts/common.sh@343 -- # case "$op" in
00:04:57.816 05:02:54 -- scripts/common.sh@344 -- # : 1
00:04:57.816 05:02:54 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:57.816 05:02:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:57.816 05:02:54 -- scripts/common.sh@364 -- # decimal 1
00:04:57.816 05:02:54 -- scripts/common.sh@352 -- # local d=1
00:04:57.816 05:02:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:57.816 05:02:54 -- scripts/common.sh@354 -- # echo 1
00:04:57.816 05:02:54 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:57.816 05:02:54 -- scripts/common.sh@365 -- # decimal 2
00:04:57.816 05:02:54 -- scripts/common.sh@352 -- # local d=2
00:04:57.816 05:02:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:57.816 05:02:54 -- scripts/common.sh@354 -- # echo 2
00:04:57.816 05:02:54 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:57.816 05:02:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:57.816 05:02:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:57.816 05:02:54 -- scripts/common.sh@367 -- # return 0
00:04:57.816 05:02:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:57.816 05:02:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:57.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.816 --rc genhtml_branch_coverage=1
00:04:57.816 --rc genhtml_function_coverage=1
00:04:57.816 --rc genhtml_legend=1
00:04:57.816 --rc geninfo_all_blocks=1
00:04:57.816 --rc geninfo_unexecuted_blocks=1
00:04:57.816 
00:04:57.816 '
00:04:57.816 05:02:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:57.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.816 --rc genhtml_branch_coverage=1
00:04:57.816 --rc genhtml_function_coverage=1
00:04:57.816 --rc genhtml_legend=1
00:04:57.816 --rc geninfo_all_blocks=1
00:04:57.816 --rc geninfo_unexecuted_blocks=1
00:04:57.816 
00:04:57.816 '
00:04:57.816 05:02:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:04:57.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.816 --rc genhtml_branch_coverage=1
00:04:57.816 --rc genhtml_function_coverage=1
00:04:57.816 --rc genhtml_legend=1
00:04:57.816 --rc geninfo_all_blocks=1
00:04:57.816 --rc geninfo_unexecuted_blocks=1
00:04:57.816 
00:04:57.816 '
00:04:57.816 05:02:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:57.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.816 --rc genhtml_branch_coverage=1
00:04:57.816 --rc genhtml_function_coverage=1
00:04:57.816 --rc genhtml_legend=1
00:04:57.816 --rc geninfo_all_blocks=1
00:04:57.816 --rc geninfo_unexecuted_blocks=1
00:04:57.816 
00:04:57.816 '
00:04:57.816 05:02:54 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:04:57.816 OK
00:04:57.816 05:02:54 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:57.816 
00:04:57.816 real	0m0.194s
00:04:57.816 user	0m0.119s
00:04:57.816 sys	0m0.089s
00:04:57.816 05:02:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:57.816 05:02:54 -- common/autotest_common.sh@10 -- # set +x
00:04:57.816 ************************************
00:04:57.816 END TEST rpc_client
00:04:57.816 ************************************
00:04:57.816 05:02:54 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh
00:04:57.816 05:02:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:57.816 05:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:57.816 05:02:54 -- common/autotest_common.sh@10 -- # set +x
00:04:57.816 ************************************
00:04:57.816 START TEST json_config
00:04:57.816 ************************************
00:04:57.816 05:02:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh
00:04:58.078 05:02:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:58.078 05:02:54 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:58.078 05:02:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:58.078 05:02:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:58.078 05:02:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:58.078 05:02:54 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:58.078 05:02:54 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:58.078 05:02:54 -- scripts/common.sh@335 -- # IFS=.-:
00:04:58.078 05:02:54 -- scripts/common.sh@335 -- # read -ra ver1
00:04:58.078 05:02:54 -- scripts/common.sh@336 -- # IFS=.-:
00:04:58.078 05:02:54 -- scripts/common.sh@336 -- # read -ra ver2
00:04:58.078 05:02:54 -- scripts/common.sh@337 -- # local 'op=<'
00:04:58.078 05:02:54 -- scripts/common.sh@339 -- # ver1_l=2
00:04:58.078 05:02:54 -- scripts/common.sh@340 -- # ver2_l=1
00:04:58.078 05:02:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:58.078 05:02:54 -- scripts/common.sh@343 -- # case "$op" in
00:04:58.078 05:02:54 -- scripts/common.sh@344 -- # : 1
00:04:58.078 05:02:54 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:58.078 05:02:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:58.078 05:02:54 -- scripts/common.sh@364 -- # decimal 1
00:04:58.078 05:02:54 -- scripts/common.sh@352 -- # local d=1
00:04:58.078 05:02:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:58.078 05:02:54 -- scripts/common.sh@354 -- # echo 1
00:04:58.078 05:02:54 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:58.078 05:02:54 -- scripts/common.sh@365 -- # decimal 2
00:04:58.078 05:02:54 -- scripts/common.sh@352 -- # local d=2
00:04:58.078 05:02:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:58.078 05:02:54 -- scripts/common.sh@354 -- # echo 2
00:04:58.078 05:02:54 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:58.078 05:02:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:58.078 05:02:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:58.078 05:02:54 -- scripts/common.sh@367 -- # return 0
00:04:58.078 05:02:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:58.078 05:02:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.078 --rc genhtml_branch_coverage=1
00:04:58.078 --rc genhtml_function_coverage=1
00:04:58.078 --rc genhtml_legend=1
00:04:58.078 --rc geninfo_all_blocks=1
00:04:58.078 --rc geninfo_unexecuted_blocks=1
00:04:58.078 
00:04:58.078 '
00:04:58.078 05:02:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.078 --rc genhtml_branch_coverage=1
00:04:58.078 --rc genhtml_function_coverage=1
00:04:58.078 --rc genhtml_legend=1
00:04:58.078 --rc geninfo_all_blocks=1
00:04:58.078 --rc geninfo_unexecuted_blocks=1
00:04:58.078 
00:04:58.078 '
00:04:58.078 05:02:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:04:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.078 --rc genhtml_branch_coverage=1
00:04:58.078 --rc genhtml_function_coverage=1
00:04:58.078 --rc genhtml_legend=1
00:04:58.078 --rc geninfo_all_blocks=1
00:04:58.078 --rc geninfo_unexecuted_blocks=1
00:04:58.078 
00:04:58.078 '
00:04:58.078 05:02:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.078 --rc genhtml_branch_coverage=1
00:04:58.078 --rc genhtml_function_coverage=1
00:04:58.078 --rc genhtml_legend=1
00:04:58.078 --rc geninfo_all_blocks=1
00:04:58.078 --rc geninfo_unexecuted_blocks=1
00:04:58.078 
00:04:58.078 '
00:04:58.078 05:02:54 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh
00:04:58.078 05:02:54 -- nvmf/common.sh@7 -- # uname -s
00:04:58.078 05:02:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:58.078 05:02:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:58.078 05:02:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:58.078 05:02:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:58.078 05:02:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:58.078 05:02:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:58.078 05:02:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:58.078 05:02:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:58.078 05:02:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:58.078 05:02:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:58.078 05:02:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:04:58.078 05:02:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:04:58.079 05:02:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:58.079 05:02:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:58.079 05:02:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:58.079 05:02:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:04:58.079 05:02:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:58.079 05:02:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:58.079 05:02:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:58.079 05:02:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.079 05:02:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.079 05:02:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.079 05:02:54 -- paths/export.sh@5 -- # export PATH
00:04:58.079 05:02:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.079 05:02:54 -- nvmf/common.sh@46 -- # : 0
00:04:58.079 05:02:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:04:58.079 05:02:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:04:58.079 05:02:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:04:58.079 05:02:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:58.079 05:02:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:58.079 05:02:54 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:04:58.079 05:02:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:04:58.079 05:02:54 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:04:58.079 05:02:54 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]]
00:04:58.079 05:02:54 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]]
00:04:58.079 05:02:54 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]]
00:04:58.079 05:02:54 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:04:58.079 05:02:54 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='')
00:04:58.079 05:02:54 -- json_config/json_config.sh@30 -- # declare -A app_pid
00:04:58.079 05:02:54 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:04:58.079 05:02:54 -- json_config/json_config.sh@31 -- # declare -A app_socket
00:04:58.079 05:02:54 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:04:58.079 05:02:54 -- json_config/json_config.sh@32 -- # declare -A app_params
00:04:58.079 05:02:54 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json')
00:04:58.079 05:02:54 -- json_config/json_config.sh@33 -- # declare -A configs_path
00:04:58.079 05:02:54 -- json_config/json_config.sh@43 -- # last_event_id=0
00:04:58.079 05:02:54 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:58.079 05:02:54 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init'
00:04:58.079 INFO: JSON configuration test init
00:04:58.079 05:02:54 -- json_config/json_config.sh@420 -- # json_config_test_init
00:04:58.079 05:02:54 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init
00:04:58.079 05:02:54 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:58.079 05:02:54 -- common/autotest_common.sh@10 -- # set +x
00:04:58.079 05:02:54 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target
00:04:58.079 05:02:54 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:58.079 05:02:54 -- common/autotest_common.sh@10 -- # set +x
00:04:58.079 05:02:54 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc
00:04:58.079 05:02:54 -- json_config/json_config.sh@98 -- # local app=target
00:04:58.079 05:02:54 -- json_config/json_config.sh@99 -- # shift
00:04:58.079 05:02:54 -- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:04:58.079 05:02:54 -- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:04:58.079 05:02:54 -- json_config/json_config.sh@104 -- # local app_extra_params=
00:04:58.079 05:02:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:04:58.079 05:02:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:04:58.079 05:02:54 -- json_config/json_config.sh@111 -- # app_pid[$app]=101632
00:04:58.079 05:02:54 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:04:58.079 Waiting for target to run...
00:04:58.079 05:02:54 -- json_config/json_config.sh@114 -- # waitforlisten 101632 /var/tmp/spdk_tgt.sock
00:04:58.079 05:02:54 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:04:58.079 05:02:54 -- common/autotest_common.sh@829 -- # '[' -z 101632 ']'
00:04:58.079 05:02:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:58.079 05:02:54 -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:58.079 05:02:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:58.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:58.079 05:02:54 -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:58.079 05:02:54 -- common/autotest_common.sh@10 -- # set +x
00:04:58.079 [2024-11-20 05:02:54.865225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:04:58.079 [2024-11-20 05:02:54.865269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101632 ]
00:04:58.079 EAL: No free 2048 kB hugepages reported on node 1
00:04:58.648 [2024-11-20 05:02:55.325829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.648 [2024-11-20 05:02:55.409993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:58.648 [2024-11-20 05:02:55.410111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.908 05:02:55 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:58.908 05:02:55 -- common/autotest_common.sh@862 -- # return 0
00:04:58.908 05:02:55 -- json_config/json_config.sh@115 -- # echo ''
00:04:58.908 
00:04:58.908 05:02:55 -- json_config/json_config.sh@322 -- # create_accel_config
00:04:58.908 05:02:55 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config
00:04:58.908 05:02:55 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:58.908 05:02:55 -- common/autotest_common.sh@10 -- # set +x
00:04:58.908 05:02:55 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]]
00:04:58.908 05:02:55 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config
00:04:58.908 05:02:55 -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:58.908 05:02:55 -- common/autotest_common.sh@10 -- # set +x
00:04:58.908 05:02:55 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:04:58.908 05:02:55 -- json_config/json_config.sh@327 -- # tgt_rpc load_config
00:04:58.908 05:02:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:02.206 05:02:58 -- json_config/json_config.sh@329 -- # tgt_check_notification_types
00:05:02.206 05:02:58 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types
00:05:02.206 05:02:58 -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:02.206 05:02:58 -- common/autotest_common.sh@10 -- # set +x
00:05:02.206 05:02:58 -- json_config/json_config.sh@48 -- # local ret=0
00:05:02.206 05:02:58 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:02.206 05:02:58 -- json_config/json_config.sh@49 -- # local enabled_types
00:05:02.206 05:02:58 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:05:02.206 05:02:58 -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:05:02.206 05:02:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:02.206 05:02:58 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister')
00:05:02.206 05:02:58 -- json_config/json_config.sh@51 -- # local get_types
00:05:02.206 05:02:58 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:05:02.206 05:02:58 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types
00:05:02.206 05:02:58 -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:02.206 05:02:58 -- common/autotest_common.sh@10 -- # set +x
00:05:02.206 05:02:58 -- json_config/json_config.sh@58 -- # return 0
00:05:02.206 05:02:58 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]]
00:05:02.206 05:02:58 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]]
00:05:02.206 05:02:58 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]]
00:05:02.206 05:02:58 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]]
00:05:02.206 05:02:58 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config
00:05:02.206 05:02:58 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config
00:05:02.206 05:02:58 -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:02.206 05:02:58 -- common/autotest_common.sh@10 -- # set +x
00:05:02.206 05:02:59 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:02.206 05:02:59 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]]
00:05:02.206 05:02:59 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma
00:05:02.206 05:02:59 -- json_config/json_config.sh@287 -- # nvmftestinit
00:05:02.206 05:02:59 -- nvmf/common.sh@429 -- # '[' -z rdma ']'
00:05:02.206 05:02:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:02.206 05:02:59 -- nvmf/common.sh@436 -- # prepare_net_devs
00:05:02.206 05:02:59 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:05:02.206 05:02:59 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:05:02.206 05:02:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:02.206 05:02:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:05:02.206 05:02:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:02.206 05:02:59 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]]
00:05:02.206 05:02:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:05:02.206 05:02:59 -- nvmf/common.sh@284 -- # xtrace_disable
00:05:02.206 05:02:59 -- common/autotest_common.sh@10 -- # set +x
00:05:08.783 05:03:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:05:08.783 05:03:04 -- nvmf/common.sh@290 -- # pci_devs=()
00:05:08.783 05:03:04 -- nvmf/common.sh@290 -- # local -a pci_devs
00:05:08.783 05:03:04 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:05:08.783 05:03:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:05:08.783 05:03:04 -- nvmf/common.sh@292 -- # pci_drivers=()
00:05:08.783 05:03:04 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:05:08.783 05:03:04 -- nvmf/common.sh@294 -- # net_devs=()
00:05:08.783 05:03:04 -- nvmf/common.sh@294 -- # local -ga net_devs
00:05:08.783 05:03:04 -- nvmf/common.sh@295 -- # e810=()
00:05:08.783 05:03:04 -- nvmf/common.sh@295 -- # local -ga e810
00:05:08.783 05:03:04 -- nvmf/common.sh@296 -- # x722=()
00:05:08.783 05:03:04 -- nvmf/common.sh@296 -- # local -ga x722
00:05:08.783 05:03:04 -- nvmf/common.sh@297 -- # mlx=()
00:05:08.783 05:03:04 -- nvmf/common.sh@297 -- # local -ga mlx
00:05:08.783 05:03:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:05:08.783 05:03:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:05:08.783 05:03:04 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}")
00:05:08.783 05:03:04 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}")
00:05:08.783 05:03:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:05:08.783 05:03:04 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:05:08.783 05:03:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:05:08.783 05:03:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:05:08.783 Found 0000:af:00.0 (0x8086 - 0x159b)
00:05:08.783 05:03:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15'
00:05:08.783 05:03:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:05:08.783 05:03:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:05:08.783 Found 0000:af:00.1 (0x8086 - 0x159b)
00:05:08.783 05:03:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15'
00:05:08.783 05:03:04 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:05:08.783 05:03:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@374 -- # (( 0 != 1 ))
00:05:08.783 05:03:04 -- nvmf/common.sh@374 -- # modprobe -r irdma
00:05:08.783 05:03:04 -- nvmf/common.sh@376 -- # modinfo irdma
00:05:08.783 05:03:04 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1
00:05:08.783 05:03:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:05:08.783 05:03:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:08.783 05:03:04 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:05:08.783 05:03:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:08.783 05:03:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:05:08.783 Found net devices under 0000:af:00.0: cvl_0_0
00:05:08.783 05:03:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:05:08.783 05:03:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:05:08.783 05:03:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:08.783 05:03:04 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:05:08.783 05:03:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:08.783 05:03:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:05:08.783 Found net devices under 0000:af:00.1: cvl_0_1
00:05:08.783 05:03:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:05:08.783 05:03:04 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:05:08.783 05:03:04 -- nvmf/common.sh@402 -- # is_hw=yes
00:05:08.783 05:03:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@408 -- # rdma_device_init
00:05:08.783 05:03:04 -- nvmf/common.sh@489 -- # load_ib_rdma_modules
00:05:08.783 05:03:04 -- nvmf/common.sh@57 -- # uname
00:05:08.783 05:03:04 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']'
00:05:08.783 05:03:04 -- nvmf/common.sh@61 -- # modprobe ib_cm
00:05:08.783 05:03:04 -- nvmf/common.sh@62 -- # modprobe ib_core
00:05:08.783 05:03:04 -- nvmf/common.sh@63 -- # modprobe ib_umad
00:05:08.783 05:03:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs
00:05:08.783 05:03:04 -- nvmf/common.sh@65 -- # modprobe iw_cm
00:05:08.783 05:03:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm
00:05:08.783 05:03:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm
00:05:08.783 05:03:04 -- nvmf/common.sh@490 -- # allocate_nic_ips
00:05:08.783 05:03:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:05:08.783 05:03:04 -- nvmf/common.sh@72 -- # get_rdma_if_list
00:05:08.783 05:03:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs
00:05:08.783 05:03:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs
00:05:08.783 05:03:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net
00:05:08.783 05:03:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:05:08.783 05:03:04 -- nvmf/common.sh@95 -- # (( 2 == 0 ))
00:05:08.783 05:03:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}"
00:05:08.783 05:03:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:08.783 05:03:04 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@103 -- # echo cvl_0_0
00:05:08.783 05:03:04 -- nvmf/common.sh@104 -- # continue 2
00:05:08.783 05:03:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}"
00:05:08.783 05:03:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:08.783 05:03:04 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:08.783 05:03:04 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@103 -- # echo cvl_0_1
00:05:08.783 05:03:04 -- nvmf/common.sh@104 -- # continue 2
00:05:08.783 05:03:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list)
00:05:08.783 05:03:04 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0
00:05:08.783 05:03:04 -- nvmf/common.sh@111 -- # interface=cvl_0_0
00:05:08.783 05:03:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0
00:05:08.783 05:03:04 -- nvmf/common.sh@112 -- # awk '{print $4}'
00:05:08.783 05:03:04 -- nvmf/common.sh@112 -- # cut -d/ -f1
00:05:08.783 05:03:04 -- nvmf/common.sh@73 -- # ip=
00:05:08.783 05:03:04 -- nvmf/common.sh@74 -- # [[ -z '' ]]
00:05:08.783 05:03:04 -- nvmf/common.sh@75 -- # ip addr add 192.168.100.8/24 dev cvl_0_0
00:05:08.783 05:03:04 -- nvmf/common.sh@76 -- # ip link set cvl_0_0 up
00:05:08.783 05:03:04 -- nvmf/common.sh@77 -- # (( count = count + 1 ))
00:05:08.783 05:03:04 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0
00:05:08.783 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000
00:05:08.783 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff
00:05:08.783 altname enp175s0f0np0
00:05:08.784 altname ens801f0np0
00:05:08.784 inet 192.168.100.8/24 scope global cvl_0_0
00:05:08.784 valid_lft forever preferred_lft forever
00:05:08.784 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll
00:05:08.784 valid_lft forever preferred_lft forever
00:05:08.784 05:03:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list)
00:05:08.784 05:03:04 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1
00:05:08.784 05:03:04 -- nvmf/common.sh@111 -- # interface=cvl_0_1
00:05:08.784 05:03:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1
00:05:08.784 05:03:04 -- nvmf/common.sh@112 -- # awk '{print $4}'
00:05:08.784 05:03:04 -- nvmf/common.sh@112 -- # cut -d/ -f1
00:05:08.784 05:03:04 -- nvmf/common.sh@73 -- # ip=
00:05:08.784 05:03:04 -- nvmf/common.sh@74 -- # [[ -z '' ]]
00:05:08.784 05:03:04 -- nvmf/common.sh@75 -- # ip addr add 192.168.100.9/24 dev cvl_0_1
00:05:08.784 05:03:04 -- nvmf/common.sh@76 -- # ip link set cvl_0_1 up
00:05:08.784 05:03:04 -- nvmf/common.sh@77 -- # (( count = count + 1 ))
00:05:08.784 05:03:04 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1
00:05:08.784 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000
00:05:08.784 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff
00:05:08.784 altname enp175s0f1np1
00:05:08.784 altname ens801f1np1
00:05:08.784 inet 192.168.100.9/24 scope global cvl_0_1
00:05:08.784 valid_lft forever preferred_lft forever
00:05:08.784 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll
00:05:08.784 valid_lft forever preferred_lft forever
00:05:08.784 05:03:04 -- nvmf/common.sh@410 -- # return 0
00:05:08.784 05:03:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:05:08.784 05:03:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:05:08.784 05:03:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]]
00:05:08.784 05:03:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips
00:05:08.784 05:03:04 -- nvmf/common.sh@85 -- # get_rdma_if_list
00:05:08.784 05:03:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs
00:05:08.784 05:03:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs
00:05:08.784 05:03:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net
00:05:08.784 05:03:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:05:08.784 05:03:04 -- nvmf/common.sh@95 -- # (( 2 == 0 ))
00:05:08.784 05:03:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}"
00:05:08.784 05:03:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:08.784 05:03:04 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]]
00:05:08.784 05:03:04 -- nvmf/common.sh@103 -- # echo cvl_0_0
00:05:08.784 05:03:04 -- nvmf/common.sh@104 -- # continue 2
00:05:08.784 05:03:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}"
00:05:08.784 05:03:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:08.784 05:03:04 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]]
00:05:08.784 05:03:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:08.784 05:03:04 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]]
00:05:08.784 05:03:04 -- nvmf/common.sh@103 -- # echo cvl_0_1
00:05:08.784 05:03:04 -- nvmf/common.sh@104 -- # continue 2
00:05:08.784 05:03:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list)
00:05:08.784 05:03:04 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0
00:05:08.784 05:03:04 -- nvmf/common.sh@111 -- # interface=cvl_0_0
00:05:08.784 05:03:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0
00:05:08.784 05:03:04 -- nvmf/common.sh@112 -- # awk '{print $4}'
00:05:08.784 05:03:04 -- nvmf/common.sh@112 -- # cut -d/ -f1
00:05:08.784 05:03:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list)
00:05:08.784 05:03:04 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1
00:05:08.784 05:03:04 -- nvmf/common.sh@111 -- # interface=cvl_0_1
00:05:08.784 05:03:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1
00:05:08.784 05:03:04 -- nvmf/common.sh@112 -- # awk '{print $4}'
00:05:08.784 05:03:04 -- nvmf/common.sh@112 -- # cut -d/ -f1
00:05:08.784 05:03:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8
00:05:08.784 192.168.100.9'
00:05:08.784 05:03:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8
00:05:08.784 192.168.100.9'
00:05:08.784 05:03:05 -- nvmf/common.sh@445 -- # head -n 1
00:05:08.784 05:03:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:05:08.784 05:03:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8
00:05:08.784 192.168.100.9'
00:05:08.784 05:03:05 -- nvmf/common.sh@446 -- # tail -n +2
00:05:08.784 05:03:05 -- nvmf/common.sh@446 -- # head -n 1
00:05:08.784 05:03:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:05:08.784 05:03:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']'
00:05:08.784 05:03:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:05:08.784 05:03:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']'
00:05:08.784 05:03:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']'
00:05:08.784 05:03:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma
00:05:08.784 05:03:05 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]]
00:05:08.784 05:03:05 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:08.784 05:03:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:08.784 MallocForNvmf0
00:05:08.784 05:03:05 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:08.784 05:03:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:08.784 MallocForNvmf1
00:05:08.784 05:03:05 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0
00:05:08.784 05:03:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0
[2024-11-20 05:03:05.561395] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:05:09.044 [2024-11-20 05:03:05.616093] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1226220/0x1225860) succeed.
00:05:09.044 [2024-11-20 05:03:05.626107] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1228360/0x1225de0) succeed.
00:05:09.044 [2024-11-20 05:03:05.626130] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576
00:05:09.044 [2024-11-20 05:03:05.628306] iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (1024)
00:05:09.044 [2024-11-20 05:03:05.628317] iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:05:09.044 [2024-11-20 05:03:05.629980] transport.c: 625:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:05:09.044 05:03:05 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.044 05:03:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.044 05:03:05 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.044 05:03:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.304 05:03:06 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.304 05:03:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.563 05:03:06 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:09.563 05:03:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:09.563 [2024-11-20 05:03:06.380259] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:09.823 05:03:06 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:09.823 05:03:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.823 05:03:06 -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.823 05:03:06 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:09.823 05:03:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.823 05:03:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.823 05:03:06 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:09.823 05:03:06 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.823 05:03:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.823 MallocBdevForConfigChangeCheck 00:05:10.083 05:03:06 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:10.083 05:03:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.083 05:03:06 -- common/autotest_common.sh@10 -- # set +x 00:05:10.083 05:03:06 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:10.083 05:03:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.342 05:03:06 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:10.342 INFO: shutting down applications... 
00:05:10.342 05:03:06 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:10.342 05:03:06 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:10.342 05:03:06 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:10.342 05:03:06 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:12.252 Calling clear_iscsi_subsystem 00:05:12.252 Calling clear_nvmf_subsystem 00:05:12.252 Calling clear_nbd_subsystem 00:05:12.252 Calling clear_ublk_subsystem 00:05:12.252 Calling clear_vhost_blk_subsystem 00:05:12.252 Calling clear_vhost_scsi_subsystem 00:05:12.252 Calling clear_scheduler_subsystem 00:05:12.252 Calling clear_bdev_subsystem 00:05:12.252 Calling clear_accel_subsystem 00:05:12.252 Calling clear_vmd_subsystem 00:05:12.252 Calling clear_sock_subsystem 00:05:12.252 Calling clear_iobuf_subsystem 00:05:12.252 05:03:08 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py 00:05:12.252 05:03:08 -- json_config/json_config.sh@396 -- # count=100 00:05:12.252 05:03:08 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:12.252 05:03:08 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.252 05:03:08 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:12.252 05:03:08 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:12.252 05:03:08 -- json_config/json_config.sh@398 -- # break 00:05:12.252 05:03:08 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:12.252 05:03:08 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app 
target 00:05:12.252 05:03:08 -- json_config/json_config.sh@120 -- # local app=target 00:05:12.252 05:03:08 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:12.252 05:03:08 -- json_config/json_config.sh@124 -- # [[ -n 101632 ]] 00:05:12.252 05:03:08 -- json_config/json_config.sh@127 -- # kill -SIGINT 101632 00:05:12.252 05:03:08 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:12.252 05:03:08 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:12.252 05:03:08 -- json_config/json_config.sh@130 -- # kill -0 101632 00:05:12.252 05:03:08 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:12.821 05:03:09 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:12.821 05:03:09 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:12.821 05:03:09 -- json_config/json_config.sh@130 -- # kill -0 101632 00:05:12.821 05:03:09 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:12.821 05:03:09 -- json_config/json_config.sh@132 -- # break 00:05:12.821 05:03:09 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:12.821 05:03:09 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:12.821 SPDK target shutdown done 00:05:12.821 05:03:09 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:12.821 INFO: relaunching applications... 
00:05:12.821 05:03:09 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.821 05:03:09 -- json_config/json_config.sh@98 -- # local app=target 00:05:12.821 05:03:09 -- json_config/json_config.sh@99 -- # shift 00:05:12.821 05:03:09 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:12.821 05:03:09 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:12.821 05:03:09 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:12.821 05:03:09 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:12.821 05:03:09 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:12.821 05:03:09 -- json_config/json_config.sh@111 -- # app_pid[$app]=106988 00:05:12.821 05:03:09 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:12.821 Waiting for target to run... 00:05:12.821 05:03:09 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.821 05:03:09 -- json_config/json_config.sh@114 -- # waitforlisten 106988 /var/tmp/spdk_tgt.sock 00:05:12.821 05:03:09 -- common/autotest_common.sh@829 -- # '[' -z 106988 ']' 00:05:12.821 05:03:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.821 05:03:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.821 05:03:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:12.821 05:03:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.821 05:03:09 -- common/autotest_common.sh@10 -- # set +x 00:05:12.821 [2024-11-20 05:03:09.482681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:12.821 [2024-11-20 05:03:09.482737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106988 ] 00:05:12.821 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.391 [2024-11-20 05:03:09.935781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.391 [2024-11-20 05:03:10.023884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.391 [2024-11-20 05:03:10.023987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.688 [2024-11-20 05:03:13.022931] transport.c: 284:nvmf_transport_create: *WARNING*: The num_shared_buffers value (4095) is larger than the available iobuf pool size (1024). Please increase the iobuf pool sizes. 00:05:16.688 [2024-11-20 05:03:13.037732] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x24041f0/0x2408970) succeed. 00:05:16.688 [2024-11-20 05:03:13.047904] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x24090a0/0x2408ef0) succeed. 00:05:16.688 [2024-11-20 05:03:13.050076] iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:05:16.688 [2024-11-20 05:03:13.050091] iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:16.688 [2024-11-20 05:03:13.051731] transport.c: 625:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 
00:05:16.688 [2024-11-20 05:03:13.079953] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:16.948 05:03:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.948 05:03:13 -- common/autotest_common.sh@862 -- # return 0 00:05:16.948 05:03:13 -- json_config/json_config.sh@115 -- # echo '' 00:05:16.948 00:05:16.948 05:03:13 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:16.948 05:03:13 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:16.948 INFO: Checking if target configuration is the same... 00:05:16.948 05:03:13 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:16.948 05:03:13 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.948 05:03:13 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.948 + '[' 2 -ne 2 ']' 00:05:16.948 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:16.948 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 
00:05:16.948 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:05:16.948 +++ basename /dev/fd/62 00:05:16.948 ++ mktemp /tmp/62.XXX 00:05:16.948 + tmp_file_1=/tmp/62.HlU 00:05:16.948 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.948 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.948 + tmp_file_2=/tmp/spdk_tgt_config.json.Qzw 00:05:16.948 + ret=0 00:05:16.948 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:17.208 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:17.208 + diff -u /tmp/62.HlU /tmp/spdk_tgt_config.json.Qzw 00:05:17.208 + echo 'INFO: JSON config files are the same' 00:05:17.208 INFO: JSON config files are the same 00:05:17.208 + rm /tmp/62.HlU /tmp/spdk_tgt_config.json.Qzw 00:05:17.208 + exit 0 00:05:17.208 05:03:13 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:17.208 05:03:13 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:17.208 INFO: changing configuration and checking if this can be detected... 
00:05:17.208 05:03:13 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:17.208 05:03:13 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:17.468 05:03:14 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:17.468 05:03:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.468 05:03:14 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.468 + '[' 2 -ne 2 ']' 00:05:17.468 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:17.468 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 
00:05:17.468 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:05:17.468 +++ basename /dev/fd/62 00:05:17.468 ++ mktemp /tmp/62.XXX 00:05:17.468 + tmp_file_1=/tmp/62.Qw8 00:05:17.468 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.468 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:17.468 + tmp_file_2=/tmp/spdk_tgt_config.json.HFZ 00:05:17.468 + ret=0 00:05:17.468 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:17.727 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:17.727 + diff -u /tmp/62.Qw8 /tmp/spdk_tgt_config.json.HFZ 00:05:17.727 + ret=1 00:05:17.727 + echo '=== Start of file: /tmp/62.Qw8 ===' 00:05:17.727 + cat /tmp/62.Qw8 00:05:17.727 + echo '=== End of file: /tmp/62.Qw8 ===' 00:05:17.727 + echo '' 00:05:17.727 + echo '=== Start of file: /tmp/spdk_tgt_config.json.HFZ ===' 00:05:17.727 + cat /tmp/spdk_tgt_config.json.HFZ 00:05:17.727 + echo '=== End of file: /tmp/spdk_tgt_config.json.HFZ ===' 00:05:17.727 + echo '' 00:05:17.727 + rm /tmp/62.Qw8 /tmp/spdk_tgt_config.json.HFZ 00:05:17.727 + exit 1 00:05:17.727 05:03:14 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:17.727 INFO: configuration change detected. 
00:05:17.727 05:03:14 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:17.727 05:03:14 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:17.727 05:03:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.727 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:05:17.727 05:03:14 -- json_config/json_config.sh@360 -- # local ret=0 00:05:17.727 05:03:14 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:17.727 05:03:14 -- json_config/json_config.sh@370 -- # [[ -n 106988 ]] 00:05:17.727 05:03:14 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:17.987 05:03:14 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:17.987 05:03:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.987 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:05:17.987 05:03:14 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:17.987 05:03:14 -- json_config/json_config.sh@246 -- # uname -s 00:05:17.987 05:03:14 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:17.987 05:03:14 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:17.987 05:03:14 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:17.987 05:03:14 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:17.987 05:03:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:17.987 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:05:17.987 05:03:14 -- json_config/json_config.sh@376 -- # killprocess 106988 00:05:17.987 05:03:14 -- common/autotest_common.sh@936 -- # '[' -z 106988 ']' 00:05:17.987 05:03:14 -- common/autotest_common.sh@940 -- # kill -0 106988 00:05:17.987 05:03:14 -- common/autotest_common.sh@941 -- # uname 00:05:17.987 05:03:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.987 05:03:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 106988 00:05:17.987 05:03:14 
-- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.987 05:03:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.987 05:03:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 106988' 00:05:17.987 killing process with pid 106988 00:05:17.987 05:03:14 -- common/autotest_common.sh@955 -- # kill 106988 00:05:17.987 05:03:14 -- common/autotest_common.sh@960 -- # wait 106988 00:05:19.895 05:03:16 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.895 05:03:16 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:19.895 05:03:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.895 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:19.895 05:03:16 -- json_config/json_config.sh@381 -- # return 0 00:05:19.895 05:03:16 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:19.895 INFO: Success 00:05:19.895 05:03:16 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:19.895 05:03:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:05:19.895 05:03:16 -- nvmf/common.sh@116 -- # sync 00:05:19.895 05:03:16 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:05:19.895 05:03:16 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:05:19.895 05:03:16 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:05:19.895 05:03:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:05:19.895 05:03:16 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:05:19.895 00:05:19.895 real 0m21.656s 00:05:19.895 user 0m24.461s 00:05:19.895 sys 0m6.668s 00:05:19.895 05:03:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.895 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:19.895 ************************************ 00:05:19.895 END TEST json_config 00:05:19.895 ************************************ 00:05:19.895 05:03:16 -- 
spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.895 05:03:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.895 05:03:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.895 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:19.895 ************************************ 00:05:19.895 START TEST json_config_extra_key 00:05:19.895 ************************************ 00:05:19.895 05:03:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.895 05:03:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.895 05:03:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.895 05:03:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.896 05:03:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.896 05:03:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.896 05:03:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.896 05:03:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.896 05:03:16 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.896 05:03:16 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.896 05:03:16 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.896 05:03:16 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.896 05:03:16 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.896 05:03:16 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.896 05:03:16 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.896 05:03:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.896 05:03:16 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.896 05:03:16 -- scripts/common.sh@344 -- # : 1 00:05:19.896 05:03:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.896 05:03:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.896 05:03:16 -- scripts/common.sh@364 -- # decimal 1 00:05:19.896 05:03:16 -- scripts/common.sh@352 -- # local d=1 00:05:19.896 05:03:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.896 05:03:16 -- scripts/common.sh@354 -- # echo 1 00:05:19.896 05:03:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.896 05:03:16 -- scripts/common.sh@365 -- # decimal 2 00:05:19.896 05:03:16 -- scripts/common.sh@352 -- # local d=2 00:05:19.896 05:03:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.896 05:03:16 -- scripts/common.sh@354 -- # echo 2 00:05:19.896 05:03:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.896 05:03:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.896 05:03:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.896 05:03:16 -- scripts/common.sh@367 -- # return 0 00:05:19.896 05:03:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.896 05:03:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.896 --rc genhtml_branch_coverage=1 00:05:19.896 --rc genhtml_function_coverage=1 00:05:19.896 --rc genhtml_legend=1 00:05:19.896 --rc geninfo_all_blocks=1 00:05:19.896 --rc geninfo_unexecuted_blocks=1 00:05:19.896 00:05:19.896 ' 00:05:19.896 05:03:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.896 --rc genhtml_branch_coverage=1 00:05:19.896 --rc genhtml_function_coverage=1 00:05:19.896 --rc genhtml_legend=1 00:05:19.896 --rc geninfo_all_blocks=1 00:05:19.896 --rc geninfo_unexecuted_blocks=1 00:05:19.896 00:05:19.896 ' 00:05:19.896 05:03:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.896 --rc genhtml_branch_coverage=1 00:05:19.896 --rc 
genhtml_function_coverage=1 00:05:19.896 --rc genhtml_legend=1 00:05:19.896 --rc geninfo_all_blocks=1 00:05:19.896 --rc geninfo_unexecuted_blocks=1 00:05:19.896 00:05:19.896 ' 00:05:19.896 05:03:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.896 --rc genhtml_branch_coverage=1 00:05:19.896 --rc genhtml_function_coverage=1 00:05:19.896 --rc genhtml_legend=1 00:05:19.896 --rc geninfo_all_blocks=1 00:05:19.896 --rc geninfo_unexecuted_blocks=1 00:05:19.896 00:05:19.896 ' 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.896 05:03:16 -- nvmf/common.sh@7 -- # uname -s 00:05:19.896 05:03:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.896 05:03:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.896 05:03:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.896 05:03:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.896 05:03:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.896 05:03:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.896 05:03:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.896 05:03:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.896 05:03:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.896 05:03:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.896 05:03:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:19.896 05:03:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:19.896 05:03:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.896 05:03:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.896 05:03:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.896 05:03:16 -- nvmf/common.sh@44 -- # 
source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:05:19.896 05:03:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.896 05:03:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.896 05:03:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.896 05:03:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.896 05:03:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.896 05:03:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.896 05:03:16 -- paths/export.sh@5 -- # export PATH 00:05:19.896 05:03:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.896 05:03:16 -- nvmf/common.sh@46 -- # : 0 00:05:19.896 05:03:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:19.896 05:03:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:19.896 05:03:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:19.896 05:03:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.896 05:03:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.896 05:03:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:19.896 05:03:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:19.896 05:03:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" 
"${LINENO}"' ERR 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:19.896 INFO: launching applications... 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=108275 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:19.896 Waiting for target to run... 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 108275 /var/tmp/spdk_tgt.sock 00:05:19.896 05:03:16 -- common/autotest_common.sh@829 -- # '[' -z 108275 ']' 00:05:19.896 05:03:16 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.896 05:03:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.896 05:03:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.896 05:03:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:19.896 05:03:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.896 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:19.896 [2024-11-20 05:03:16.550775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:19.896 [2024-11-20 05:03:16.550823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108275 ] 00:05:19.896 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.156 [2024-11-20 05:03:16.824845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.156 [2024-11-20 05:03:16.888562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.156 [2024-11-20 05:03:16.888675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.724 05:03:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.724 05:03:17 -- common/autotest_common.sh@862 -- # return 0 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:20.724 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:20.724 INFO: shutting down applications... 
00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 108275 ]] 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 108275 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@50 -- # kill -0 108275 00:05:20.724 05:03:17 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:21.293 05:03:17 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:21.293 05:03:17 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:21.293 05:03:17 -- json_config/json_config_extra_key.sh@50 -- # kill -0 108275 00:05:21.293 05:03:17 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:21.293 05:03:17 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:21.293 05:03:17 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:21.293 05:03:17 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:21.293 SPDK target shutdown done 00:05:21.293 05:03:17 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:21.293 Success 00:05:21.293 00:05:21.293 real 0m1.531s 00:05:21.293 user 0m1.333s 00:05:21.293 sys 0m0.387s 00:05:21.293 05:03:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.293 05:03:17 -- common/autotest_common.sh@10 -- # set +x 00:05:21.293 ************************************ 00:05:21.293 END TEST json_config_extra_key 00:05:21.293 ************************************ 00:05:21.293 05:03:17 -- spdk/autotest.sh@167 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.293 05:03:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.293 05:03:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.293 05:03:17 -- common/autotest_common.sh@10 -- # set +x 00:05:21.293 ************************************ 00:05:21.293 START TEST alias_rpc 00:05:21.293 ************************************ 00:05:21.293 05:03:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.293 * Looking for test storage... 00:05:21.293 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc 00:05:21.293 05:03:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.293 05:03:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.293 05:03:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.293 05:03:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.293 05:03:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.293 05:03:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.293 05:03:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.293 05:03:18 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.293 05:03:18 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.293 05:03:18 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.293 05:03:18 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.293 05:03:18 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.293 05:03:18 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.293 05:03:18 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.293 05:03:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.293 05:03:18 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.293 05:03:18 -- scripts/common.sh@344 -- # : 1 00:05:21.293 05:03:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.293 05:03:18 -- 
scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.293 05:03:18 -- scripts/common.sh@364 -- # decimal 1 00:05:21.293 05:03:18 -- scripts/common.sh@352 -- # local d=1 00:05:21.293 05:03:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.293 05:03:18 -- scripts/common.sh@354 -- # echo 1 00:05:21.293 05:03:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.293 05:03:18 -- scripts/common.sh@365 -- # decimal 2 00:05:21.293 05:03:18 -- scripts/common.sh@352 -- # local d=2 00:05:21.293 05:03:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.293 05:03:18 -- scripts/common.sh@354 -- # echo 2 00:05:21.293 05:03:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.293 05:03:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.293 05:03:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.293 05:03:18 -- scripts/common.sh@367 -- # return 0 00:05:21.293 05:03:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.293 05:03:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.293 --rc genhtml_branch_coverage=1 00:05:21.293 --rc genhtml_function_coverage=1 00:05:21.293 --rc genhtml_legend=1 00:05:21.293 --rc geninfo_all_blocks=1 00:05:21.293 --rc geninfo_unexecuted_blocks=1 00:05:21.293 00:05:21.293 ' 00:05:21.294 05:03:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.294 --rc genhtml_branch_coverage=1 00:05:21.294 --rc genhtml_function_coverage=1 00:05:21.294 --rc genhtml_legend=1 00:05:21.294 --rc geninfo_all_blocks=1 00:05:21.294 --rc geninfo_unexecuted_blocks=1 00:05:21.294 00:05:21.294 ' 00:05:21.294 05:03:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.294 --rc 
genhtml_branch_coverage=1 00:05:21.294 --rc genhtml_function_coverage=1 00:05:21.294 --rc genhtml_legend=1 00:05:21.294 --rc geninfo_all_blocks=1 00:05:21.294 --rc geninfo_unexecuted_blocks=1 00:05:21.294 00:05:21.294 ' 00:05:21.294 05:03:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.294 --rc genhtml_branch_coverage=1 00:05:21.294 --rc genhtml_function_coverage=1 00:05:21.294 --rc genhtml_legend=1 00:05:21.294 --rc geninfo_all_blocks=1 00:05:21.294 --rc geninfo_unexecuted_blocks=1 00:05:21.294 00:05:21.294 ' 00:05:21.294 05:03:18 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.294 05:03:18 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=108567 00:05:21.294 05:03:18 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.294 05:03:18 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 108567 00:05:21.294 05:03:18 -- common/autotest_common.sh@829 -- # '[' -z 108567 ']' 00:05:21.294 05:03:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.294 05:03:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.294 05:03:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.294 05:03:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.294 05:03:18 -- common/autotest_common.sh@10 -- # set +x 00:05:21.294 [2024-11-20 05:03:18.112219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:21.294 [2024-11-20 05:03:18.112273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108567 ] 00:05:21.553 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.553 [2024-11-20 05:03:18.178102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.553 [2024-11-20 05:03:18.246044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.553 [2024-11-20 05:03:18.246185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.120 05:03:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.120 05:03:18 -- common/autotest_common.sh@862 -- # return 0 00:05:22.120 05:03:18 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:22.380 05:03:19 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 108567 00:05:22.380 05:03:19 -- common/autotest_common.sh@936 -- # '[' -z 108567 ']' 00:05:22.380 05:03:19 -- common/autotest_common.sh@940 -- # kill -0 108567 00:05:22.380 05:03:19 -- common/autotest_common.sh@941 -- # uname 00:05:22.380 05:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.380 05:03:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108567 00:05:22.380 05:03:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:22.380 05:03:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:22.380 05:03:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108567' 00:05:22.380 killing process with pid 108567 00:05:22.380 05:03:19 -- common/autotest_common.sh@955 -- # kill 108567 00:05:22.380 05:03:19 -- common/autotest_common.sh@960 -- # wait 108567 00:05:22.948 00:05:22.948 real 0m1.605s 00:05:22.948 user 0m1.736s 00:05:22.948 sys 0m0.429s 00:05:22.948 
05:03:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.948 05:03:19 -- common/autotest_common.sh@10 -- # set +x 00:05:22.948 ************************************ 00:05:22.948 END TEST alias_rpc 00:05:22.948 ************************************ 00:05:22.948 05:03:19 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:22.948 05:03:19 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.948 05:03:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.948 05:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.948 05:03:19 -- common/autotest_common.sh@10 -- # set +x 00:05:22.948 ************************************ 00:05:22.948 START TEST spdkcli_tcp 00:05:22.948 ************************************ 00:05:22.948 05:03:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.948 * Looking for test storage... 00:05:22.948 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli 00:05:22.948 05:03:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:22.948 05:03:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:22.948 05:03:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:22.948 05:03:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:22.948 05:03:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:22.948 05:03:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:22.948 05:03:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:22.948 05:03:19 -- scripts/common.sh@335 -- # IFS=.-: 00:05:22.948 05:03:19 -- scripts/common.sh@335 -- # read -ra ver1 00:05:22.948 05:03:19 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.948 05:03:19 -- scripts/common.sh@336 -- # read -ra ver2 00:05:22.948 05:03:19 -- scripts/common.sh@337 -- # local 'op=<' 00:05:22.948 05:03:19 -- scripts/common.sh@339 -- # ver1_l=2 
00:05:22.948 05:03:19 -- scripts/common.sh@340 -- # ver2_l=1 00:05:22.948 05:03:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:22.948 05:03:19 -- scripts/common.sh@343 -- # case "$op" in 00:05:22.948 05:03:19 -- scripts/common.sh@344 -- # : 1 00:05:22.948 05:03:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:22.948 05:03:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.948 05:03:19 -- scripts/common.sh@364 -- # decimal 1 00:05:22.948 05:03:19 -- scripts/common.sh@352 -- # local d=1 00:05:22.948 05:03:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.948 05:03:19 -- scripts/common.sh@354 -- # echo 1 00:05:22.948 05:03:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.948 05:03:19 -- scripts/common.sh@365 -- # decimal 2 00:05:22.948 05:03:19 -- scripts/common.sh@352 -- # local d=2 00:05:22.948 05:03:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.948 05:03:19 -- scripts/common.sh@354 -- # echo 2 00:05:22.948 05:03:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.948 05:03:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.948 05:03:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.948 05:03:19 -- scripts/common.sh@367 -- # return 0 00:05:22.948 05:03:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.948 05:03:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.948 --rc genhtml_branch_coverage=1 00:05:22.948 --rc genhtml_function_coverage=1 00:05:22.948 --rc genhtml_legend=1 00:05:22.948 --rc geninfo_all_blocks=1 00:05:22.948 --rc geninfo_unexecuted_blocks=1 00:05:22.948 00:05:22.948 ' 00:05:22.948 05:03:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.948 --rc genhtml_branch_coverage=1 00:05:22.948 --rc 
genhtml_function_coverage=1 00:05:22.948 --rc genhtml_legend=1 00:05:22.948 --rc geninfo_all_blocks=1 00:05:22.948 --rc geninfo_unexecuted_blocks=1 00:05:22.948 00:05:22.948 ' 00:05:22.948 05:03:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.948 --rc genhtml_branch_coverage=1 00:05:22.948 --rc genhtml_function_coverage=1 00:05:22.948 --rc genhtml_legend=1 00:05:22.948 --rc geninfo_all_blocks=1 00:05:22.948 --rc geninfo_unexecuted_blocks=1 00:05:22.948 00:05:22.948 ' 00:05:22.948 05:03:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.948 --rc genhtml_branch_coverage=1 00:05:22.948 --rc genhtml_function_coverage=1 00:05:22.948 --rc genhtml_legend=1 00:05:22.948 --rc geninfo_all_blocks=1 00:05:22.948 --rc geninfo_unexecuted_blocks=1 00:05:22.948 00:05:22.948 ' 00:05:22.948 05:03:19 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/common.sh 00:05:22.948 05:03:19 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:22.948 05:03:19 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py 00:05:22.948 05:03:19 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:22.948 05:03:19 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:22.948 05:03:19 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:22.948 05:03:19 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:22.948 05:03:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.948 05:03:19 -- common/autotest_common.sh@10 -- # set +x 00:05:22.948 05:03:19 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=108859 00:05:22.948 05:03:19 -- spdkcli/tcp.sh@27 -- # waitforlisten 108859 00:05:22.948 05:03:19 -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:22.948 05:03:19 -- common/autotest_common.sh@829 -- # '[' -z 108859 ']' 00:05:22.948 05:03:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.948 05:03:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.948 05:03:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.948 05:03:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.948 05:03:19 -- common/autotest_common.sh@10 -- # set +x 00:05:22.948 [2024-11-20 05:03:19.763779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.948 [2024-11-20 05:03:19.763823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108859 ] 00:05:23.208 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.208 [2024-11-20 05:03:19.830038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.208 [2024-11-20 05:03:19.899627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.208 [2024-11-20 05:03:19.899837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.208 [2024-11-20 05:03:19.899837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.776 05:03:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.776 05:03:20 -- common/autotest_common.sh@862 -- # return 0 00:05:23.776 05:03:20 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.776 05:03:20 -- spdkcli/tcp.sh@31 -- # socat_pid=109091 00:05:23.776 05:03:20 -- spdkcli/tcp.sh@33 -- 
# /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.036 [ 00:05:24.036 "bdev_malloc_delete", 00:05:24.036 "bdev_malloc_create", 00:05:24.036 "bdev_null_resize", 00:05:24.036 "bdev_null_delete", 00:05:24.036 "bdev_null_create", 00:05:24.036 "bdev_nvme_cuse_unregister", 00:05:24.036 "bdev_nvme_cuse_register", 00:05:24.036 "bdev_opal_new_user", 00:05:24.036 "bdev_opal_set_lock_state", 00:05:24.036 "bdev_opal_delete", 00:05:24.036 "bdev_opal_get_info", 00:05:24.036 "bdev_opal_create", 00:05:24.036 "bdev_nvme_opal_revert", 00:05:24.036 "bdev_nvme_opal_init", 00:05:24.036 "bdev_nvme_send_cmd", 00:05:24.036 "bdev_nvme_get_path_iostat", 00:05:24.036 "bdev_nvme_get_mdns_discovery_info", 00:05:24.036 "bdev_nvme_stop_mdns_discovery", 00:05:24.036 "bdev_nvme_start_mdns_discovery", 00:05:24.036 "bdev_nvme_set_multipath_policy", 00:05:24.036 "bdev_nvme_set_preferred_path", 00:05:24.036 "bdev_nvme_get_io_paths", 00:05:24.036 "bdev_nvme_remove_error_injection", 00:05:24.036 "bdev_nvme_add_error_injection", 00:05:24.036 "bdev_nvme_get_discovery_info", 00:05:24.036 "bdev_nvme_stop_discovery", 00:05:24.036 "bdev_nvme_start_discovery", 00:05:24.036 "bdev_nvme_get_controller_health_info", 00:05:24.036 "bdev_nvme_disable_controller", 00:05:24.036 "bdev_nvme_enable_controller", 00:05:24.036 "bdev_nvme_reset_controller", 00:05:24.036 "bdev_nvme_get_transport_statistics", 00:05:24.036 "bdev_nvme_apply_firmware", 00:05:24.036 "bdev_nvme_detach_controller", 00:05:24.036 "bdev_nvme_get_controllers", 00:05:24.036 "bdev_nvme_attach_controller", 00:05:24.036 "bdev_nvme_set_hotplug", 00:05:24.036 "bdev_nvme_set_options", 00:05:24.036 "bdev_passthru_delete", 00:05:24.036 "bdev_passthru_create", 00:05:24.036 "bdev_lvol_grow_lvstore", 00:05:24.036 "bdev_lvol_get_lvols", 00:05:24.036 "bdev_lvol_get_lvstores", 00:05:24.036 "bdev_lvol_delete", 00:05:24.036 "bdev_lvol_set_read_only", 00:05:24.036 "bdev_lvol_resize", 00:05:24.036 
"bdev_lvol_decouple_parent", 00:05:24.036 "bdev_lvol_inflate", 00:05:24.036 "bdev_lvol_rename", 00:05:24.036 "bdev_lvol_clone_bdev", 00:05:24.036 "bdev_lvol_clone", 00:05:24.036 "bdev_lvol_snapshot", 00:05:24.036 "bdev_lvol_create", 00:05:24.036 "bdev_lvol_delete_lvstore", 00:05:24.036 "bdev_lvol_rename_lvstore", 00:05:24.036 "bdev_lvol_create_lvstore", 00:05:24.036 "bdev_raid_set_options", 00:05:24.036 "bdev_raid_remove_base_bdev", 00:05:24.036 "bdev_raid_add_base_bdev", 00:05:24.036 "bdev_raid_delete", 00:05:24.036 "bdev_raid_create", 00:05:24.036 "bdev_raid_get_bdevs", 00:05:24.036 "bdev_error_inject_error", 00:05:24.036 "bdev_error_delete", 00:05:24.036 "bdev_error_create", 00:05:24.036 "bdev_split_delete", 00:05:24.036 "bdev_split_create", 00:05:24.036 "bdev_delay_delete", 00:05:24.036 "bdev_delay_create", 00:05:24.036 "bdev_delay_update_latency", 00:05:24.036 "bdev_zone_block_delete", 00:05:24.036 "bdev_zone_block_create", 00:05:24.036 "blobfs_create", 00:05:24.036 "blobfs_detect", 00:05:24.036 "blobfs_set_cache_size", 00:05:24.036 "bdev_aio_delete", 00:05:24.036 "bdev_aio_rescan", 00:05:24.036 "bdev_aio_create", 00:05:24.036 "bdev_ftl_set_property", 00:05:24.036 "bdev_ftl_get_properties", 00:05:24.036 "bdev_ftl_get_stats", 00:05:24.036 "bdev_ftl_unmap", 00:05:24.036 "bdev_ftl_unload", 00:05:24.036 "bdev_ftl_delete", 00:05:24.036 "bdev_ftl_load", 00:05:24.036 "bdev_ftl_create", 00:05:24.036 "bdev_virtio_attach_controller", 00:05:24.036 "bdev_virtio_scsi_get_devices", 00:05:24.036 "bdev_virtio_detach_controller", 00:05:24.036 "bdev_virtio_blk_set_hotplug", 00:05:24.036 "bdev_iscsi_delete", 00:05:24.036 "bdev_iscsi_create", 00:05:24.036 "bdev_iscsi_set_options", 00:05:24.036 "accel_error_inject_error", 00:05:24.036 "ioat_scan_accel_module", 00:05:24.036 "dsa_scan_accel_module", 00:05:24.036 "iaa_scan_accel_module", 00:05:24.036 "iscsi_set_options", 00:05:24.036 "iscsi_get_auth_groups", 00:05:24.036 "iscsi_auth_group_remove_secret", 00:05:24.036 
"iscsi_auth_group_add_secret", 00:05:24.036 "iscsi_delete_auth_group", 00:05:24.036 "iscsi_create_auth_group", 00:05:24.036 "iscsi_set_discovery_auth", 00:05:24.036 "iscsi_get_options", 00:05:24.036 "iscsi_target_node_request_logout", 00:05:24.036 "iscsi_target_node_set_redirect", 00:05:24.036 "iscsi_target_node_set_auth", 00:05:24.036 "iscsi_target_node_add_lun", 00:05:24.036 "iscsi_get_connections", 00:05:24.036 "iscsi_portal_group_set_auth", 00:05:24.036 "iscsi_start_portal_group", 00:05:24.036 "iscsi_delete_portal_group", 00:05:24.036 "iscsi_create_portal_group", 00:05:24.036 "iscsi_get_portal_groups", 00:05:24.036 "iscsi_delete_target_node", 00:05:24.036 "iscsi_target_node_remove_pg_ig_maps", 00:05:24.036 "iscsi_target_node_add_pg_ig_maps", 00:05:24.036 "iscsi_create_target_node", 00:05:24.036 "iscsi_get_target_nodes", 00:05:24.036 "iscsi_delete_initiator_group", 00:05:24.036 "iscsi_initiator_group_remove_initiators", 00:05:24.036 "iscsi_initiator_group_add_initiators", 00:05:24.036 "iscsi_create_initiator_group", 00:05:24.036 "iscsi_get_initiator_groups", 00:05:24.036 "nvmf_set_crdt", 00:05:24.036 "nvmf_set_config", 00:05:24.036 "nvmf_set_max_subsystems", 00:05:24.036 "nvmf_subsystem_get_listeners", 00:05:24.036 "nvmf_subsystem_get_qpairs", 00:05:24.036 "nvmf_subsystem_get_controllers", 00:05:24.036 "nvmf_get_stats", 00:05:24.036 "nvmf_get_transports", 00:05:24.036 "nvmf_create_transport", 00:05:24.036 "nvmf_get_targets", 00:05:24.036 "nvmf_delete_target", 00:05:24.036 "nvmf_create_target", 00:05:24.036 "nvmf_subsystem_allow_any_host", 00:05:24.036 "nvmf_subsystem_remove_host", 00:05:24.036 "nvmf_subsystem_add_host", 00:05:24.036 "nvmf_subsystem_remove_ns", 00:05:24.036 "nvmf_subsystem_add_ns", 00:05:24.036 "nvmf_subsystem_listener_set_ana_state", 00:05:24.036 "nvmf_discovery_get_referrals", 00:05:24.036 "nvmf_discovery_remove_referral", 00:05:24.036 "nvmf_discovery_add_referral", 00:05:24.036 "nvmf_subsystem_remove_listener", 00:05:24.036 
"nvmf_subsystem_add_listener", 00:05:24.036 "nvmf_delete_subsystem", 00:05:24.036 "nvmf_create_subsystem", 00:05:24.036 "nvmf_get_subsystems", 00:05:24.036 "env_dpdk_get_mem_stats", 00:05:24.036 "nbd_get_disks", 00:05:24.036 "nbd_stop_disk", 00:05:24.036 "nbd_start_disk", 00:05:24.036 "ublk_recover_disk", 00:05:24.036 "ublk_get_disks", 00:05:24.036 "ublk_stop_disk", 00:05:24.036 "ublk_start_disk", 00:05:24.036 "ublk_destroy_target", 00:05:24.036 "ublk_create_target", 00:05:24.036 "virtio_blk_create_transport", 00:05:24.036 "virtio_blk_get_transports", 00:05:24.036 "vhost_controller_set_coalescing", 00:05:24.036 "vhost_get_controllers", 00:05:24.036 "vhost_delete_controller", 00:05:24.036 "vhost_create_blk_controller", 00:05:24.036 "vhost_scsi_controller_remove_target", 00:05:24.036 "vhost_scsi_controller_add_target", 00:05:24.036 "vhost_start_scsi_controller", 00:05:24.036 "vhost_create_scsi_controller", 00:05:24.036 "thread_set_cpumask", 00:05:24.036 "framework_get_scheduler", 00:05:24.036 "framework_set_scheduler", 00:05:24.036 "framework_get_reactors", 00:05:24.036 "thread_get_io_channels", 00:05:24.036 "thread_get_pollers", 00:05:24.036 "thread_get_stats", 00:05:24.036 "framework_monitor_context_switch", 00:05:24.036 "spdk_kill_instance", 00:05:24.036 "log_enable_timestamps", 00:05:24.036 "log_get_flags", 00:05:24.036 "log_clear_flag", 00:05:24.036 "log_set_flag", 00:05:24.036 "log_get_level", 00:05:24.036 "log_set_level", 00:05:24.036 "log_get_print_level", 00:05:24.036 "log_set_print_level", 00:05:24.036 "framework_enable_cpumask_locks", 00:05:24.036 "framework_disable_cpumask_locks", 00:05:24.036 "framework_wait_init", 00:05:24.036 "framework_start_init", 00:05:24.036 "scsi_get_devices", 00:05:24.036 "bdev_get_histogram", 00:05:24.036 "bdev_enable_histogram", 00:05:24.036 "bdev_set_qos_limit", 00:05:24.036 "bdev_set_qd_sampling_period", 00:05:24.036 "bdev_get_bdevs", 00:05:24.036 "bdev_reset_iostat", 00:05:24.036 "bdev_get_iostat", 00:05:24.036 
"bdev_examine", 00:05:24.036 "bdev_wait_for_examine", 00:05:24.036 "bdev_set_options", 00:05:24.036 "notify_get_notifications", 00:05:24.036 "notify_get_types", 00:05:24.036 "accel_get_stats", 00:05:24.036 "accel_set_options", 00:05:24.036 "accel_set_driver", 00:05:24.036 "accel_crypto_key_destroy", 00:05:24.036 "accel_crypto_keys_get", 00:05:24.036 "accel_crypto_key_create", 00:05:24.036 "accel_assign_opc", 00:05:24.037 "accel_get_module_info", 00:05:24.037 "accel_get_opc_assignments", 00:05:24.037 "vmd_rescan", 00:05:24.037 "vmd_remove_device", 00:05:24.037 "vmd_enable", 00:05:24.037 "sock_set_default_impl", 00:05:24.037 "sock_impl_set_options", 00:05:24.037 "sock_impl_get_options", 00:05:24.037 "iobuf_get_stats", 00:05:24.037 "iobuf_set_options", 00:05:24.037 "framework_get_pci_devices", 00:05:24.037 "framework_get_config", 00:05:24.037 "framework_get_subsystems", 00:05:24.037 "trace_get_info", 00:05:24.037 "trace_get_tpoint_group_mask", 00:05:24.037 "trace_disable_tpoint_group", 00:05:24.037 "trace_enable_tpoint_group", 00:05:24.037 "trace_clear_tpoint_mask", 00:05:24.037 "trace_set_tpoint_mask", 00:05:24.037 "spdk_get_version", 00:05:24.037 "rpc_get_methods" 00:05:24.037 ] 00:05:24.037 05:03:20 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:24.037 05:03:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.037 05:03:20 -- common/autotest_common.sh@10 -- # set +x 00:05:24.037 05:03:20 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:24.037 05:03:20 -- spdkcli/tcp.sh@38 -- # killprocess 108859 00:05:24.037 05:03:20 -- common/autotest_common.sh@936 -- # '[' -z 108859 ']' 00:05:24.037 05:03:20 -- common/autotest_common.sh@940 -- # kill -0 108859 00:05:24.037 05:03:20 -- common/autotest_common.sh@941 -- # uname 00:05:24.037 05:03:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.037 05:03:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108859 00:05:24.037 05:03:20 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.037 05:03:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.037 05:03:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108859' 00:05:24.037 killing process with pid 108859 00:05:24.037 05:03:20 -- common/autotest_common.sh@955 -- # kill 108859 00:05:24.037 05:03:20 -- common/autotest_common.sh@960 -- # wait 108859 00:05:24.605 00:05:24.605 real 0m1.647s 00:05:24.605 user 0m2.959s 00:05:24.605 sys 0m0.505s 00:05:24.605 05:03:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.605 05:03:21 -- common/autotest_common.sh@10 -- # set +x 00:05:24.605 ************************************ 00:05:24.605 END TEST spdkcli_tcp 00:05:24.605 ************************************ 00:05:24.606 05:03:21 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.606 05:03:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.606 05:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.606 05:03:21 -- common/autotest_common.sh@10 -- # set +x 00:05:24.606 ************************************ 00:05:24.606 START TEST dpdk_mem_utility 00:05:24.606 ************************************ 00:05:24.606 05:03:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.606 * Looking for test storage... 
00:05:24.606 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility 00:05:24.606 05:03:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:24.606 05:03:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:24.606 05:03:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:24.606 05:03:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:24.606 05:03:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:24.606 05:03:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:24.606 05:03:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:24.606 05:03:21 -- scripts/common.sh@335 -- # IFS=.-: 00:05:24.606 05:03:21 -- scripts/common.sh@335 -- # read -ra ver1 00:05:24.606 05:03:21 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.606 05:03:21 -- scripts/common.sh@336 -- # read -ra ver2 00:05:24.606 05:03:21 -- scripts/common.sh@337 -- # local 'op=<' 00:05:24.606 05:03:21 -- scripts/common.sh@339 -- # ver1_l=2 00:05:24.606 05:03:21 -- scripts/common.sh@340 -- # ver2_l=1 00:05:24.606 05:03:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:24.606 05:03:21 -- scripts/common.sh@343 -- # case "$op" in 00:05:24.606 05:03:21 -- scripts/common.sh@344 -- # : 1 00:05:24.606 05:03:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:24.606 05:03:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.606 05:03:21 -- scripts/common.sh@364 -- # decimal 1 00:05:24.606 05:03:21 -- scripts/common.sh@352 -- # local d=1 00:05:24.606 05:03:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.606 05:03:21 -- scripts/common.sh@354 -- # echo 1 00:05:24.606 05:03:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:24.606 05:03:21 -- scripts/common.sh@365 -- # decimal 2 00:05:24.606 05:03:21 -- scripts/common.sh@352 -- # local d=2 00:05:24.606 05:03:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.606 05:03:21 -- scripts/common.sh@354 -- # echo 2 00:05:24.606 05:03:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:24.606 05:03:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:24.606 05:03:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:24.606 05:03:21 -- scripts/common.sh@367 -- # return 0 00:05:24.606 05:03:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.606 05:03:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:24.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.606 --rc genhtml_branch_coverage=1 00:05:24.606 --rc genhtml_function_coverage=1 00:05:24.606 --rc genhtml_legend=1 00:05:24.606 --rc geninfo_all_blocks=1 00:05:24.606 --rc geninfo_unexecuted_blocks=1 00:05:24.606 00:05:24.606 ' 00:05:24.606 05:03:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:24.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.606 --rc genhtml_branch_coverage=1 00:05:24.606 --rc genhtml_function_coverage=1 00:05:24.606 --rc genhtml_legend=1 00:05:24.606 --rc geninfo_all_blocks=1 00:05:24.606 --rc geninfo_unexecuted_blocks=1 00:05:24.606 00:05:24.606 ' 00:05:24.606 05:03:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:24.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.606 --rc genhtml_branch_coverage=1 00:05:24.606 --rc 
genhtml_function_coverage=1 00:05:24.606 --rc genhtml_legend=1 00:05:24.606 --rc geninfo_all_blocks=1 00:05:24.606 --rc geninfo_unexecuted_blocks=1 00:05:24.606 00:05:24.606 ' 00:05:24.606 05:03:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:24.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.606 --rc genhtml_branch_coverage=1 00:05:24.606 --rc genhtml_function_coverage=1 00:05:24.606 --rc genhtml_legend=1 00:05:24.606 --rc geninfo_all_blocks=1 00:05:24.606 --rc geninfo_unexecuted_blocks=1 00:05:24.606 00:05:24.606 ' 00:05:24.606 05:03:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:24.606 05:03:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=109281 00:05:24.606 05:03:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 109281 00:05:24.606 05:03:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.606 05:03:21 -- common/autotest_common.sh@829 -- # '[' -z 109281 ']' 00:05:24.606 05:03:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.606 05:03:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.606 05:03:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.606 05:03:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.606 05:03:21 -- common/autotest_common.sh@10 -- # set +x 00:05:24.866 [2024-11-20 05:03:21.447774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:24.866 [2024-11-20 05:03:21.447826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109281 ] 00:05:24.866 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.866 [2024-11-20 05:03:21.514311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.866 [2024-11-20 05:03:21.590173] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.866 [2024-11-20 05:03:21.590283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.434 05:03:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.434 05:03:22 -- common/autotest_common.sh@862 -- # return 0 00:05:25.434 05:03:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.434 05:03:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.434 05:03:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.434 05:03:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.434 { 00:05:25.434 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.434 } 00:05:25.434 05:03:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.434 05:03:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:25.694 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:25.694 1 heaps totaling size 814.000000 MiB 00:05:25.694 size: 814.000000 MiB heap id: 0 00:05:25.694 end heaps---------- 00:05:25.694 8 mempools totaling size 598.116089 MiB 00:05:25.694 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.694 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.694 size: 84.521057 MiB name: bdev_io_109281 00:05:25.694 size: 51.011292 MiB name: evtpool_109281 00:05:25.694 size: 
50.003479 MiB name: msgpool_109281 00:05:25.694 size: 21.763794 MiB name: PDU_Pool 00:05:25.694 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:25.694 size: 0.026123 MiB name: Session_Pool 00:05:25.694 end mempools------- 00:05:25.694 6 memzones totaling size 4.142822 MiB 00:05:25.694 size: 1.000366 MiB name: RG_ring_0_109281 00:05:25.694 size: 1.000366 MiB name: RG_ring_1_109281 00:05:25.694 size: 1.000366 MiB name: RG_ring_4_109281 00:05:25.694 size: 1.000366 MiB name: RG_ring_5_109281 00:05:25.694 size: 0.125366 MiB name: RG_ring_2_109281 00:05:25.694 size: 0.015991 MiB name: RG_ring_3_109281 00:05:25.694 end memzones------- 00:05:25.694 05:03:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.694 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:25.694 list of free elements. size: 12.519348 MiB 00:05:25.694 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:25.694 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:25.694 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:25.694 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:25.694 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:25.694 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:25.694 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:25.694 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:25.694 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:25.694 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:25.694 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:25.694 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:25.694 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:25.694 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:25.694 element at address: 
0x200003a00000 with size: 0.355530 MiB 00:05:25.694 list of standard malloc elements. size: 199.218079 MiB 00:05:25.694 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:25.694 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:25.694 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:25.694 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:25.694 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:25.694 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:25.694 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:25.694 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:25.694 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:25.694 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:25.695 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:25.695 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:25.695 
element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:25.695 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:25.695 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:25.695 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:25.695 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:25.695 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:25.695 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:25.695 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:25.695 list of memzone associated elements. 
size: 602.262573 MiB 00:05:25.695 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:25.695 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.695 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:25.695 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.695 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:25.695 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_109281_0 00:05:25.695 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:25.695 associated memzone info: size: 48.002930 MiB name: MP_evtpool_109281_0 00:05:25.695 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:25.695 associated memzone info: size: 48.002930 MiB name: MP_msgpool_109281_0 00:05:25.695 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:25.695 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.695 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:25.695 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.695 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:25.695 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_109281 00:05:25.695 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:25.695 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_109281 00:05:25.695 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:25.695 associated memzone info: size: 1.007996 MiB name: MP_evtpool_109281 00:05:25.695 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:25.695 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.695 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:25.695 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.695 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:25.695 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.695 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:25.695 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.695 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:25.695 associated memzone info: size: 1.000366 MiB name: RG_ring_0_109281 00:05:25.695 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:25.695 associated memzone info: size: 1.000366 MiB name: RG_ring_1_109281 00:05:25.695 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:25.695 associated memzone info: size: 1.000366 MiB name: RG_ring_4_109281 00:05:25.695 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:25.695 associated memzone info: size: 1.000366 MiB name: RG_ring_5_109281 00:05:25.695 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:25.695 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_109281 00:05:25.695 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:25.695 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.695 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:25.695 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.695 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:25.695 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.695 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:25.695 associated memzone info: size: 0.125366 MiB name: RG_ring_2_109281 00:05:25.695 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:25.695 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.695 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:25.695 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.695 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:25.695 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_109281 00:05:25.695 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:25.695 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.695 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:25.695 associated memzone info: size: 0.000183 MiB name: MP_msgpool_109281 00:05:25.695 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:25.695 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_109281 00:05:25.695 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:25.695 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.695 05:03:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.695 05:03:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 109281 00:05:25.695 05:03:22 -- common/autotest_common.sh@936 -- # '[' -z 109281 ']' 00:05:25.695 05:03:22 -- common/autotest_common.sh@940 -- # kill -0 109281 00:05:25.695 05:03:22 -- common/autotest_common.sh@941 -- # uname 00:05:25.695 05:03:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.695 05:03:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 109281 00:05:25.695 05:03:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.695 05:03:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.695 05:03:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 109281' 00:05:25.695 killing process with pid 109281 00:05:25.695 05:03:22 -- common/autotest_common.sh@955 -- # kill 109281 00:05:25.695 05:03:22 -- common/autotest_common.sh@960 -- # wait 109281 00:05:25.954 00:05:25.955 real 0m1.506s 00:05:25.955 user 0m1.569s 00:05:25.955 sys 0m0.423s 00:05:25.955 05:03:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.955 05:03:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.955 
************************************ 00:05:25.955 END TEST dpdk_mem_utility 00:05:25.955 ************************************ 00:05:25.955 05:03:22 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:05:25.955 05:03:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.955 05:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.955 05:03:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.955 ************************************ 00:05:25.955 START TEST event 00:05:25.955 ************************************ 00:05:25.955 05:03:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:05:26.214 * Looking for test storage... 00:05:26.214 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:05:26.214 05:03:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.214 05:03:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.214 05:03:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.214 05:03:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.214 05:03:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.214 05:03:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.214 05:03:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.214 05:03:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.214 05:03:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.214 05:03:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.214 05:03:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.214 05:03:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.214 05:03:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.214 05:03:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.214 05:03:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.214 05:03:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.214 
05:03:22 -- scripts/common.sh@344 -- # : 1 00:05:26.214 05:03:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.214 05:03:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.214 05:03:22 -- scripts/common.sh@364 -- # decimal 1 00:05:26.214 05:03:22 -- scripts/common.sh@352 -- # local d=1 00:05:26.214 05:03:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.214 05:03:22 -- scripts/common.sh@354 -- # echo 1 00:05:26.214 05:03:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.214 05:03:22 -- scripts/common.sh@365 -- # decimal 2 00:05:26.214 05:03:22 -- scripts/common.sh@352 -- # local d=2 00:05:26.214 05:03:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.214 05:03:22 -- scripts/common.sh@354 -- # echo 2 00:05:26.214 05:03:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.214 05:03:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.214 05:03:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.214 05:03:22 -- scripts/common.sh@367 -- # return 0 00:05:26.214 05:03:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.214 05:03:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.214 --rc genhtml_branch_coverage=1 00:05:26.214 --rc genhtml_function_coverage=1 00:05:26.214 --rc genhtml_legend=1 00:05:26.214 --rc geninfo_all_blocks=1 00:05:26.214 --rc geninfo_unexecuted_blocks=1 00:05:26.214 00:05:26.214 ' 00:05:26.214 05:03:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.214 --rc genhtml_branch_coverage=1 00:05:26.215 --rc genhtml_function_coverage=1 00:05:26.215 --rc genhtml_legend=1 00:05:26.215 --rc geninfo_all_blocks=1 00:05:26.215 --rc geninfo_unexecuted_blocks=1 00:05:26.215 00:05:26.215 ' 00:05:26.215 05:03:22 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.215 --rc genhtml_branch_coverage=1 00:05:26.215 --rc genhtml_function_coverage=1 00:05:26.215 --rc genhtml_legend=1 00:05:26.215 --rc geninfo_all_blocks=1 00:05:26.215 --rc geninfo_unexecuted_blocks=1 00:05:26.215 00:05:26.215 ' 00:05:26.215 05:03:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.215 --rc genhtml_branch_coverage=1 00:05:26.215 --rc genhtml_function_coverage=1 00:05:26.215 --rc genhtml_legend=1 00:05:26.215 --rc geninfo_all_blocks=1 00:05:26.215 --rc geninfo_unexecuted_blocks=1 00:05:26.215 00:05:26.215 ' 00:05:26.215 05:03:22 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:26.215 05:03:22 -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.215 05:03:22 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.215 05:03:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:26.215 05:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.215 05:03:22 -- common/autotest_common.sh@10 -- # set +x 00:05:26.215 ************************************ 00:05:26.215 START TEST event_perf 00:05:26.215 ************************************ 00:05:26.215 05:03:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.215 Running I/O for 1 seconds...[2024-11-20 05:03:22.974852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:26.215 [2024-11-20 05:03:22.974932] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109678 ] 00:05:26.215 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.474 [2024-11-20 05:03:23.047759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.474 [2024-11-20 05:03:23.117840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.474 [2024-11-20 05:03:23.117922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.474 [2024-11-20 05:03:23.118034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.474 [2024-11-20 05:03:23.118034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.413 Running I/O for 1 seconds... 00:05:27.413 lcore 0: 211885 00:05:27.413 lcore 1: 211882 00:05:27.413 lcore 2: 211883 00:05:27.413 lcore 3: 211884 00:05:27.413 done. 
00:05:27.413 00:05:27.413 real 0m1.250s 00:05:27.413 user 0m4.157s 00:05:27.413 sys 0m0.090s 00:05:27.413 05:03:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.413 05:03:24 -- common/autotest_common.sh@10 -- # set +x 00:05:27.413 ************************************ 00:05:27.413 END TEST event_perf 00:05:27.413 ************************************ 00:05:27.413 05:03:24 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.413 05:03:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:27.413 05:03:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.413 05:03:24 -- common/autotest_common.sh@10 -- # set +x 00:05:27.413 ************************************ 00:05:27.413 START TEST event_reactor 00:05:27.413 ************************************ 00:05:27.673 05:03:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.673 [2024-11-20 05:03:24.260564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:27.673 [2024-11-20 05:03:24.260643] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109892 ] 00:05:27.673 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.673 [2024-11-20 05:03:24.330998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.673 [2024-11-20 05:03:24.398282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.053 test_start 00:05:29.053 oneshot 00:05:29.053 tick 100 00:05:29.053 tick 100 00:05:29.053 tick 250 00:05:29.053 tick 100 00:05:29.053 tick 100 00:05:29.053 tick 100 00:05:29.053 tick 250 00:05:29.053 tick 500 00:05:29.053 tick 100 00:05:29.053 tick 100 00:05:29.053 tick 250 00:05:29.053 tick 100 00:05:29.053 tick 100 00:05:29.053 test_end 00:05:29.053 00:05:29.053 real 0m1.240s 00:05:29.053 user 0m1.160s 00:05:29.053 sys 0m0.075s 00:05:29.053 05:03:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.053 05:03:25 -- common/autotest_common.sh@10 -- # set +x 00:05:29.053 ************************************ 00:05:29.053 END TEST event_reactor 00:05:29.053 ************************************ 00:05:29.053 05:03:25 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.053 05:03:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:29.053 05:03:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.053 05:03:25 -- common/autotest_common.sh@10 -- # set +x 00:05:29.053 ************************************ 00:05:29.053 START TEST event_reactor_perf 00:05:29.053 ************************************ 00:05:29.053 05:03:25 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.053 [2024-11-20 05:03:25.540386] 
Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.053 [2024-11-20 05:03:25.540465] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110081 ] 00:05:29.053 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.053 [2024-11-20 05:03:25.613823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.053 [2024-11-20 05:03:25.681142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.992 test_start 00:05:29.992 test_end 00:05:29.992 Performance: 519300 events per second 00:05:29.992 00:05:29.992 real 0m1.245s 00:05:29.992 user 0m1.157s 00:05:29.992 sys 0m0.083s 00:05:29.992 05:03:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.992 05:03:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.992 ************************************ 00:05:29.992 END TEST event_reactor_perf 00:05:29.992 ************************************ 00:05:29.992 05:03:26 -- event/event.sh@49 -- # uname -s 00:05:29.992 05:03:26 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:29.992 05:03:26 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:29.992 05:03:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.992 05:03:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.992 05:03:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.992 ************************************ 00:05:29.992 START TEST event_scheduler 00:05:29.992 ************************************ 00:05:29.992 05:03:26 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.252 * Looking for test storage... 
00:05:30.252 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler 00:05:30.252 05:03:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:30.252 05:03:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.252 05:03:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:30.252 05:03:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.252 05:03:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.252 05:03:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.252 05:03:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.252 05:03:26 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.252 05:03:26 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.252 05:03:26 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.252 05:03:26 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.252 05:03:26 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.252 05:03:26 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.252 05:03:26 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.252 05:03:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.252 05:03:26 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.252 05:03:26 -- scripts/common.sh@344 -- # : 1 00:05:30.252 05:03:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.252 05:03:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.252 05:03:26 -- scripts/common.sh@364 -- # decimal 1 00:05:30.252 05:03:26 -- scripts/common.sh@352 -- # local d=1 00:05:30.252 05:03:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.252 05:03:26 -- scripts/common.sh@354 -- # echo 1 00:05:30.252 05:03:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.252 05:03:26 -- scripts/common.sh@365 -- # decimal 2 00:05:30.252 05:03:26 -- scripts/common.sh@352 -- # local d=2 00:05:30.252 05:03:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.252 05:03:26 -- scripts/common.sh@354 -- # echo 2 00:05:30.252 05:03:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.252 05:03:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.252 05:03:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.252 05:03:26 -- scripts/common.sh@367 -- # return 0 00:05:30.252 05:03:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.252 05:03:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.252 --rc genhtml_branch_coverage=1 00:05:30.252 --rc genhtml_function_coverage=1 00:05:30.252 --rc genhtml_legend=1 00:05:30.252 --rc geninfo_all_blocks=1 00:05:30.252 --rc geninfo_unexecuted_blocks=1 00:05:30.252 00:05:30.252 ' 00:05:30.252 05:03:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.252 --rc genhtml_branch_coverage=1 00:05:30.252 --rc genhtml_function_coverage=1 00:05:30.252 --rc genhtml_legend=1 00:05:30.252 --rc geninfo_all_blocks=1 00:05:30.252 --rc geninfo_unexecuted_blocks=1 00:05:30.252 00:05:30.253 ' 00:05:30.253 05:03:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.253 --rc genhtml_branch_coverage=1 00:05:30.253 --rc 
genhtml_function_coverage=1 00:05:30.253 --rc genhtml_legend=1 00:05:30.253 --rc geninfo_all_blocks=1 00:05:30.253 --rc geninfo_unexecuted_blocks=1 00:05:30.253 00:05:30.253 ' 00:05:30.253 05:03:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.253 --rc genhtml_branch_coverage=1 00:05:30.253 --rc genhtml_function_coverage=1 00:05:30.253 --rc genhtml_legend=1 00:05:30.253 --rc geninfo_all_blocks=1 00:05:30.253 --rc geninfo_unexecuted_blocks=1 00:05:30.253 00:05:30.253 ' 00:05:30.253 05:03:26 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.253 05:03:26 -- scheduler/scheduler.sh@35 -- # scheduler_pid=110409 00:05:30.253 05:03:26 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.253 05:03:26 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.253 05:03:26 -- scheduler/scheduler.sh@37 -- # waitforlisten 110409 00:05:30.253 05:03:26 -- common/autotest_common.sh@829 -- # '[' -z 110409 ']' 00:05:30.253 05:03:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.253 05:03:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.253 05:03:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.253 05:03:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.253 05:03:26 -- common/autotest_common.sh@10 -- # set +x 00:05:30.253 [2024-11-20 05:03:27.019865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:30.253 [2024-11-20 05:03:27.019917] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110409 ] 00:05:30.253 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.512 [2024-11-20 05:03:27.087710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.512 [2024-11-20 05:03:27.164340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.512 [2024-11-20 05:03:27.164445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.512 [2024-11-20 05:03:27.164553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.512 [2024-11-20 05:03:27.164553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.082 05:03:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.082 05:03:27 -- common/autotest_common.sh@862 -- # return 0 00:05:31.082 05:03:27 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:31.082 05:03:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.082 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.082 POWER: Env isn't set yet! 00:05:31.082 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:31.082 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.082 POWER: Cannot set governor of lcore 0 to userspace 00:05:31.082 POWER: Attempting to initialise PSTAT power management... 
00:05:31.082 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:31.082 POWER: Initialized successfully for lcore 0 power management 00:05:31.082 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:31.082 POWER: Initialized successfully for lcore 1 power management 00:05:31.082 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:31.082 POWER: Initialized successfully for lcore 2 power management 00:05:31.082 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:31.082 POWER: Initialized successfully for lcore 3 power management 00:05:31.082 [2024-11-20 05:03:27.875244] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:31.082 [2024-11-20 05:03:27.875258] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:31.082 [2024-11-20 05:03:27.875266] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:31.082 05:03:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.082 05:03:27 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:31.082 05:03:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.082 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 [2024-11-20 05:03:27.942589] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:31.342 05:03:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.342 05:03:27 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:31.342 05:03:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.342 05:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.342 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 ************************************ 00:05:31.342 START TEST scheduler_create_thread 00:05:31.342 ************************************ 00:05:31.342 05:03:27 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:31.342 05:03:27 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:31.342 05:03:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 2 00:05:31.342 05:03:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.342 05:03:27 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:31.342 05:03:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 3 00:05:31.342 05:03:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.342 05:03:27 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:31.342 05:03:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 4 00:05:31.342 05:03:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.342 05:03:27 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:31.342 05:03:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 
05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 5 00:05:31.342 05:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.342 05:03:28 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:31.342 05:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 6 00:05:31.342 05:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.342 05:03:28 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:31.342 05:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 7 00:05:31.342 05:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.342 05:03:28 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:31.342 05:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 8 00:05:31.342 05:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.342 05:03:28 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:31.342 05:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 9 00:05:31.342 05:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.342 05:03:28 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:31.342 05:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 10 00:05:31.342 05:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:05:31.342 05:03:28 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:31.342 05:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.342 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:05:31.342 05:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.343 05:03:28 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.343 05:03:28 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.343 05:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.343 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:05:32.280 05:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.280 05:03:28 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:32.280 05:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.280 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.656 05:03:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.656 05:03:30 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:33.656 05:03:30 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:33.656 05:03:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.656 05:03:30 -- common/autotest_common.sh@10 -- # set +x 00:05:34.596 05:03:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.596 00:05:34.596 real 0m3.384s 00:05:34.596 user 0m0.024s 00:05:34.596 sys 0m0.005s 00:05:34.596 05:03:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.596 05:03:31 -- common/autotest_common.sh@10 -- # set +x 00:05:34.596 ************************************ 00:05:34.596 END TEST scheduler_create_thread 00:05:34.596 ************************************ 00:05:34.596 05:03:31 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:34.596 05:03:31 -- 
scheduler/scheduler.sh@46 -- # killprocess 110409 00:05:34.596 05:03:31 -- common/autotest_common.sh@936 -- # '[' -z 110409 ']' 00:05:34.596 05:03:31 -- common/autotest_common.sh@940 -- # kill -0 110409 00:05:34.596 05:03:31 -- common/autotest_common.sh@941 -- # uname 00:05:34.596 05:03:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.596 05:03:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110409 00:05:34.596 05:03:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:34.855 05:03:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:34.855 05:03:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110409' 00:05:34.855 killing process with pid 110409 00:05:34.855 05:03:31 -- common/autotest_common.sh@955 -- # kill 110409 00:05:34.855 05:03:31 -- common/autotest_common.sh@960 -- # wait 110409 00:05:35.115 [2024-11-20 05:03:31.714641] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:35.115 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:35.115 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:35.115 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:35.115 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:35.115 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:35.115 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:35.115 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:35.115 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:35.374 00:05:35.374 real 0m5.158s 00:05:35.374 user 0m10.533s 00:05:35.374 sys 0m0.380s 00:05:35.374 05:03:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.374 05:03:31 -- common/autotest_common.sh@10 -- # set +x 00:05:35.374 ************************************ 00:05:35.374 END TEST event_scheduler 00:05:35.374 ************************************ 00:05:35.374 05:03:31 -- event/event.sh@51 -- # modprobe -n nbd 00:05:35.374 05:03:32 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:35.374 05:03:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.374 05:03:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.374 05:03:32 -- common/autotest_common.sh@10 -- # set +x 00:05:35.374 ************************************ 00:05:35.374 START TEST app_repeat 00:05:35.374 ************************************ 00:05:35.374 05:03:32 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:35.374 05:03:32 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.374 05:03:32 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.374 
05:03:32 -- event/event.sh@13 -- # local nbd_list 00:05:35.374 05:03:32 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.374 05:03:32 -- event/event.sh@14 -- # local bdev_list 00:05:35.374 05:03:32 -- event/event.sh@15 -- # local repeat_times=4 00:05:35.374 05:03:32 -- event/event.sh@17 -- # modprobe nbd 00:05:35.374 05:03:32 -- event/event.sh@19 -- # repeat_pid=111213 00:05:35.374 05:03:32 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:35.374 05:03:32 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.374 05:03:32 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 111213' 00:05:35.374 Process app_repeat pid: 111213 00:05:35.374 05:03:32 -- event/event.sh@23 -- # for i in {0..2} 00:05:35.374 05:03:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:35.374 spdk_app_start Round 0 00:05:35.374 05:03:32 -- event/event.sh@25 -- # waitforlisten 111213 /var/tmp/spdk-nbd.sock 00:05:35.374 05:03:32 -- common/autotest_common.sh@829 -- # '[' -z 111213 ']' 00:05:35.374 05:03:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.374 05:03:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.374 05:03:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.374 05:03:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.374 05:03:32 -- common/autotest_common.sh@10 -- # set +x 00:05:35.374 [2024-11-20 05:03:32.044160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:35.374 [2024-11-20 05:03:32.044221] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111213 ] 00:05:35.374 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.374 [2024-11-20 05:03:32.102944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.374 [2024-11-20 05:03:32.171355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.374 [2024-11-20 05:03:32.171358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.313 05:03:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.313 05:03:32 -- common/autotest_common.sh@862 -- # return 0 00:05:36.313 05:03:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.313 Malloc0 00:05:36.313 05:03:33 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.572 Malloc1 00:05:36.572 05:03:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@12 -- # local i 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.572 /dev/nbd0 00:05:36.572 05:03:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.832 05:03:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:36.832 05:03:33 -- common/autotest_common.sh@867 -- # local i 00:05:36.832 05:03:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.832 05:03:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.832 05:03:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:36.832 05:03:33 -- common/autotest_common.sh@871 -- # break 00:05:36.832 05:03:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.832 05:03:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.832 05:03:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.832 1+0 records in 00:05:36.832 1+0 records out 00:05:36.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193215 s, 21.2 MB/s 00:05:36.832 05:03:33 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:36.832 05:03:33 -- common/autotest_common.sh@884 -- # size=4096 00:05:36.832 05:03:33 -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:36.832 05:03:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.832 05:03:33 -- common/autotest_common.sh@887 -- # return 0 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.832 /dev/nbd1 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.832 05:03:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:36.832 05:03:33 -- common/autotest_common.sh@867 -- # local i 00:05:36.832 05:03:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.832 05:03:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.832 05:03:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:36.832 05:03:33 -- common/autotest_common.sh@871 -- # break 00:05:36.832 05:03:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.832 05:03:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.832 05:03:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.832 1+0 records in 00:05:36.832 1+0 records out 00:05:36.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193379 s, 21.2 MB/s 00:05:36.832 05:03:33 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:36.832 05:03:33 -- common/autotest_common.sh@884 -- # size=4096 00:05:36.832 05:03:33 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:36.832 05:03:33 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:05:36.832 05:03:33 -- common/autotest_common.sh@887 -- # return 0 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.832 05:03:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.092 { 00:05:37.092 "nbd_device": "/dev/nbd0", 00:05:37.092 "bdev_name": "Malloc0" 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "nbd_device": "/dev/nbd1", 00:05:37.092 "bdev_name": "Malloc1" 00:05:37.092 } 00:05:37.092 ]' 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.092 { 00:05:37.092 "nbd_device": "/dev/nbd0", 00:05:37.092 "bdev_name": "Malloc0" 00:05:37.092 }, 00:05:37.092 { 00:05:37.092 "nbd_device": "/dev/nbd1", 00:05:37.092 "bdev_name": "Malloc1" 00:05:37.092 } 00:05:37.092 ]' 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.092 /dev/nbd1' 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.092 /dev/nbd1' 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.092 
05:03:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.092 256+0 records in 00:05:37.092 256+0 records out 00:05:37.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106051 s, 98.9 MB/s 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.092 256+0 records in 00:05:37.092 256+0 records out 00:05:37.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142022 s, 73.8 MB/s 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.092 05:03:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.351 256+0 records in 00:05:37.351 256+0 records out 00:05:37.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151374 s, 69.3 MB/s 00:05:37.351 05:03:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.351 05:03:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.351 05:03:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.351 05:03:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.352 05:03:33 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@51 -- # local i 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.352 05:03:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@41 -- # break 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.352 05:03:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@41 -- # break 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.610 05:03:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@65 -- # true 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.869 05:03:34 -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.869 05:03:34 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.128 05:03:34 -- event/event.sh@35 -- # sleep 3 00:05:38.386 [2024-11-20 05:03:34.969429] app.c: 798:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:05:38.386 [2024-11-20 05:03:35.033241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.386 [2024-11-20 05:03:35.033243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.386 [2024-11-20 05:03:35.074066] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.386 [2024-11-20 05:03:35.074112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.676 05:03:37 -- event/event.sh@23 -- # for i in {0..2} 00:05:41.676 05:03:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:41.676 spdk_app_start Round 1 00:05:41.676 05:03:37 -- event/event.sh@25 -- # waitforlisten 111213 /var/tmp/spdk-nbd.sock 00:05:41.676 05:03:37 -- common/autotest_common.sh@829 -- # '[' -z 111213 ']' 00:05:41.676 05:03:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.676 05:03:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.676 05:03:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:41.676 05:03:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.676 05:03:37 -- common/autotest_common.sh@10 -- # set +x 00:05:41.676 05:03:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.676 05:03:37 -- common/autotest_common.sh@862 -- # return 0 00:05:41.676 05:03:37 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.676 Malloc0 00:05:41.676 05:03:38 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.676 Malloc1 00:05:41.676 05:03:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@12 -- # local i 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.676 05:03:38 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.676 /dev/nbd0 00:05:41.935 05:03:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.935 05:03:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.935 05:03:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:41.935 05:03:38 -- common/autotest_common.sh@867 -- # local i 00:05:41.935 05:03:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.935 05:03:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.935 05:03:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:41.935 05:03:38 -- common/autotest_common.sh@871 -- # break 00:05:41.935 05:03:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.935 05:03:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.935 05:03:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.935 1+0 records in 00:05:41.935 1+0 records out 00:05:41.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214818 s, 19.1 MB/s 00:05:41.935 05:03:38 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:41.935 05:03:38 -- common/autotest_common.sh@884 -- # size=4096 00:05:41.935 05:03:38 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:41.935 05:03:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.935 05:03:38 -- common/autotest_common.sh@887 -- # return 0 00:05:41.935 05:03:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.935 05:03:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.935 05:03:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:05:41.935 /dev/nbd1 00:05:41.935 05:03:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.935 05:03:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.935 05:03:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:41.935 05:03:38 -- common/autotest_common.sh@867 -- # local i 00:05:41.935 05:03:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.935 05:03:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.935 05:03:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:41.935 05:03:38 -- common/autotest_common.sh@871 -- # break 00:05:41.935 05:03:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.935 05:03:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.935 05:03:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.935 1+0 records in 00:05:41.935 1+0 records out 00:05:41.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203412 s, 20.1 MB/s 00:05:41.935 05:03:38 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:41.935 05:03:38 -- common/autotest_common.sh@884 -- # size=4096 00:05:41.935 05:03:38 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:41.935 05:03:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.935 05:03:38 -- common/autotest_common.sh@887 -- # return 0 00:05:41.936 05:03:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.936 05:03:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.936 05:03:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.936 05:03:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.936 05:03:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.194 { 00:05:42.194 "nbd_device": "/dev/nbd0", 00:05:42.194 "bdev_name": "Malloc0" 00:05:42.194 }, 00:05:42.194 { 00:05:42.194 "nbd_device": "/dev/nbd1", 00:05:42.194 "bdev_name": "Malloc1" 00:05:42.194 } 00:05:42.194 ]' 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.194 { 00:05:42.194 "nbd_device": "/dev/nbd0", 00:05:42.194 "bdev_name": "Malloc0" 00:05:42.194 }, 00:05:42.194 { 00:05:42.194 "nbd_device": "/dev/nbd1", 00:05:42.194 "bdev_name": "Malloc1" 00:05:42.194 } 00:05:42.194 ]' 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.194 /dev/nbd1' 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.194 /dev/nbd1' 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.194 256+0 records in 00:05:42.194 256+0 records out 00:05:42.194 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102695 s, 102 MB/s 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.194 256+0 records in 00:05:42.194 256+0 records out 00:05:42.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145903 s, 71.9 MB/s 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.194 05:03:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.194 256+0 records in 00:05:42.194 256+0 records out 00:05:42.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014682 s, 71.4 MB/s 00:05:42.194 05:03:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.194 05:03:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@51 -- # local i 00:05:42.195 05:03:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@41 -- # break 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.453 05:03:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.712 05:03:39 -- 
bdev/nbd_common.sh@41 -- # break 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.712 05:03:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@65 -- # true 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.970 05:03:39 -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.970 05:03:39 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.229 05:03:39 -- event/event.sh@35 -- # sleep 3 00:05:43.488 [2024-11-20 05:03:40.063948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.488 [2024-11-20 05:03:40.131276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.488 [2024-11-20 05:03:40.131277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.488 [2024-11-20 05:03:40.172301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:43.488 [2024-11-20 05:03:40.172342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.778 05:03:42 -- event/event.sh@23 -- # for i in {0..2} 00:05:46.778 05:03:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:46.778 spdk_app_start Round 2 00:05:46.778 05:03:42 -- event/event.sh@25 -- # waitforlisten 111213 /var/tmp/spdk-nbd.sock 00:05:46.778 05:03:42 -- common/autotest_common.sh@829 -- # '[' -z 111213 ']' 00:05:46.778 05:03:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.778 05:03:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.778 05:03:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.778 05:03:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.778 05:03:42 -- common/autotest_common.sh@10 -- # set +x 00:05:46.778 05:03:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.778 05:03:43 -- common/autotest_common.sh@862 -- # return 0 00:05:46.778 05:03:43 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.778 Malloc0 00:05:46.778 05:03:43 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.778 Malloc1 00:05:46.778 05:03:43 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.778 05:03:43 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.778 05:03:43 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.778 05:03:43 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.778 05:03:43 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.778 05:03:43 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.778 05:03:43 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.778 05:03:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.779 05:03:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.779 05:03:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.779 05:03:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.779 05:03:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.779 05:03:43 -- bdev/nbd_common.sh@12 -- # local i 00:05:46.779 05:03:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.779 05:03:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.779 05:03:43 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.779 /dev/nbd0 00:05:47.038 05:03:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.038 05:03:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.038 05:03:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:47.038 05:03:43 -- common/autotest_common.sh@867 -- # local i 00:05:47.038 05:03:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.038 05:03:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.038 05:03:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:47.038 05:03:43 -- common/autotest_common.sh@871 -- # break 00:05:47.038 05:03:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.038 05:03:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.038 05:03:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.038 1+0 records in 00:05:47.038 
1+0 records out 00:05:47.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182527 s, 22.4 MB/s 00:05:47.038 05:03:43 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:47.038 05:03:43 -- common/autotest_common.sh@884 -- # size=4096 00:05:47.038 05:03:43 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:47.038 05:03:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.038 05:03:43 -- common/autotest_common.sh@887 -- # return 0 00:05:47.038 05:03:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.038 05:03:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.038 05:03:43 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.038 /dev/nbd1 00:05:47.038 05:03:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.038 05:03:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.038 05:03:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:47.038 05:03:43 -- common/autotest_common.sh@867 -- # local i 00:05:47.038 05:03:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.039 05:03:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.039 05:03:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:47.039 05:03:43 -- common/autotest_common.sh@871 -- # break 00:05:47.039 05:03:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.039 05:03:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.039 05:03:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.039 1+0 records in 00:05:47.039 1+0 records out 00:05:47.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204114 s, 20.1 MB/s 00:05:47.039 05:03:43 -- 
common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:47.039 05:03:43 -- common/autotest_common.sh@884 -- # size=4096 00:05:47.039 05:03:43 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:47.039 05:03:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.039 05:03:43 -- common/autotest_common.sh@887 -- # return 0 00:05:47.039 05:03:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.039 05:03:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.039 05:03:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.039 05:03:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.039 05:03:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.298 { 00:05:47.298 "nbd_device": "/dev/nbd0", 00:05:47.298 "bdev_name": "Malloc0" 00:05:47.298 }, 00:05:47.298 { 00:05:47.298 "nbd_device": "/dev/nbd1", 00:05:47.298 "bdev_name": "Malloc1" 00:05:47.298 } 00:05:47.298 ]' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.298 { 00:05:47.298 "nbd_device": "/dev/nbd0", 00:05:47.298 "bdev_name": "Malloc0" 00:05:47.298 }, 00:05:47.298 { 00:05:47.298 "nbd_device": "/dev/nbd1", 00:05:47.298 "bdev_name": "Malloc1" 00:05:47.298 } 00:05:47.298 ]' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.298 /dev/nbd1' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.298 /dev/nbd1' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.298 
05:03:44 -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.298 256+0 records in 00:05:47.298 256+0 records out 00:05:47.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100478 s, 104 MB/s 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.298 256+0 records in 00:05:47.298 256+0 records out 00:05:47.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142992 s, 73.3 MB/s 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.298 256+0 records in 00:05:47.298 256+0 records out 00:05:47.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153183 s, 68.5 MB/s 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:05:47.298 05:03:44 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@51 -- # local i 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.298 05:03:44 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.558 05:03:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.558 05:03:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.558 05:03:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.558 05:03:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.558 05:03:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.558 05:03:44 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.558 05:03:44 -- bdev/nbd_common.sh@41 -- # break 00:05:47.558 05:03:44 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.558 05:03:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.558 05:03:44 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@41 -- # break 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.817 05:03:44 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@65 -- # true 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.076 05:03:44 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.076 05:03:44 -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.076 05:03:44 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.335 05:03:44 -- event/event.sh@35 -- # sleep 3 00:05:48.335 [2024-11-20 05:03:45.159807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.594 [2024-11-20 05:03:45.222580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.594 [2024-11-20 05:03:45.222583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.594 [2024-11-20 05:03:45.263337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.594 [2024-11-20 05:03:45.263377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.133 05:03:47 -- event/event.sh@38 -- # waitforlisten 111213 /var/tmp/spdk-nbd.sock 00:05:51.133 05:03:47 -- common/autotest_common.sh@829 -- # '[' -z 111213 ']' 00:05:51.133 05:03:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.133 05:03:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.133 05:03:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:51.133 05:03:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.133 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:05:51.393 05:03:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.393 05:03:48 -- common/autotest_common.sh@862 -- # return 0 00:05:51.393 05:03:48 -- event/event.sh@39 -- # killprocess 111213 00:05:51.393 05:03:48 -- common/autotest_common.sh@936 -- # '[' -z 111213 ']' 00:05:51.393 05:03:48 -- common/autotest_common.sh@940 -- # kill -0 111213 00:05:51.393 05:03:48 -- common/autotest_common.sh@941 -- # uname 00:05:51.393 05:03:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.393 05:03:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111213 00:05:51.393 05:03:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.393 05:03:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.393 05:03:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111213' 00:05:51.393 killing process with pid 111213 00:05:51.393 05:03:48 -- common/autotest_common.sh@955 -- # kill 111213 00:05:51.393 05:03:48 -- common/autotest_common.sh@960 -- # wait 111213 00:05:51.652 spdk_app_start is called in Round 0. 00:05:51.652 Shutdown signal received, stop current app iteration 00:05:51.652 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:51.652 spdk_app_start is called in Round 1. 00:05:51.652 Shutdown signal received, stop current app iteration 00:05:51.652 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:51.652 spdk_app_start is called in Round 2. 00:05:51.652 Shutdown signal received, stop current app iteration 00:05:51.652 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:51.652 spdk_app_start is called in Round 3. 
00:05:51.652 Shutdown signal received, stop current app iteration 00:05:51.652 05:03:48 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.652 05:03:48 -- event/event.sh@42 -- # return 0 00:05:51.652 00:05:51.652 real 0m16.365s 00:05:51.652 user 0m35.358s 00:05:51.652 sys 0m2.456s 00:05:51.652 05:03:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.652 05:03:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.653 ************************************ 00:05:51.653 END TEST app_repeat 00:05:51.653 ************************************ 00:05:51.653 05:03:48 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.653 05:03:48 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.653 05:03:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.653 05:03:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.653 05:03:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.653 ************************************ 00:05:51.653 START TEST cpu_locks 00:05:51.653 ************************************ 00:05:51.653 05:03:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.912 * Looking for test storage... 
00:05:51.912 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:05:51.912 05:03:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:51.912 05:03:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:51.912 05:03:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:51.912 05:03:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:51.912 05:03:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:51.912 05:03:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:51.912 05:03:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:51.912 05:03:48 -- scripts/common.sh@335 -- # IFS=.-: 00:05:51.912 05:03:48 -- scripts/common.sh@335 -- # read -ra ver1 00:05:51.912 05:03:48 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.912 05:03:48 -- scripts/common.sh@336 -- # read -ra ver2 00:05:51.912 05:03:48 -- scripts/common.sh@337 -- # local 'op=<' 00:05:51.912 05:03:48 -- scripts/common.sh@339 -- # ver1_l=2 00:05:51.912 05:03:48 -- scripts/common.sh@340 -- # ver2_l=1 00:05:51.912 05:03:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:51.912 05:03:48 -- scripts/common.sh@343 -- # case "$op" in 00:05:51.912 05:03:48 -- scripts/common.sh@344 -- # : 1 00:05:51.912 05:03:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:51.912 05:03:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.912 05:03:48 -- scripts/common.sh@364 -- # decimal 1 00:05:51.912 05:03:48 -- scripts/common.sh@352 -- # local d=1 00:05:51.912 05:03:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.912 05:03:48 -- scripts/common.sh@354 -- # echo 1 00:05:51.912 05:03:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:51.912 05:03:48 -- scripts/common.sh@365 -- # decimal 2 00:05:51.912 05:03:48 -- scripts/common.sh@352 -- # local d=2 00:05:51.912 05:03:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.912 05:03:48 -- scripts/common.sh@354 -- # echo 2 00:05:51.912 05:03:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:51.912 05:03:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:51.912 05:03:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:51.912 05:03:48 -- scripts/common.sh@367 -- # return 0 00:05:51.912 05:03:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.912 05:03:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:51.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.912 --rc genhtml_branch_coverage=1 00:05:51.912 --rc genhtml_function_coverage=1 00:05:51.912 --rc genhtml_legend=1 00:05:51.912 --rc geninfo_all_blocks=1 00:05:51.912 --rc geninfo_unexecuted_blocks=1 00:05:51.912 00:05:51.912 ' 00:05:51.912 05:03:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:51.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.912 --rc genhtml_branch_coverage=1 00:05:51.912 --rc genhtml_function_coverage=1 00:05:51.912 --rc genhtml_legend=1 00:05:51.912 --rc geninfo_all_blocks=1 00:05:51.912 --rc geninfo_unexecuted_blocks=1 00:05:51.912 00:05:51.912 ' 00:05:51.912 05:03:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:51.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.912 --rc genhtml_branch_coverage=1 00:05:51.912 --rc 
genhtml_function_coverage=1 00:05:51.912 --rc genhtml_legend=1 00:05:51.912 --rc geninfo_all_blocks=1 00:05:51.912 --rc geninfo_unexecuted_blocks=1 00:05:51.913 00:05:51.913 ' 00:05:51.913 05:03:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:51.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.913 --rc genhtml_branch_coverage=1 00:05:51.913 --rc genhtml_function_coverage=1 00:05:51.913 --rc genhtml_legend=1 00:05:51.913 --rc geninfo_all_blocks=1 00:05:51.913 --rc geninfo_unexecuted_blocks=1 00:05:51.913 00:05:51.913 ' 00:05:51.913 05:03:48 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.913 05:03:48 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.913 05:03:48 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.913 05:03:48 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.913 05:03:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.913 05:03:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.913 05:03:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.913 ************************************ 00:05:51.913 START TEST default_locks 00:05:51.913 ************************************ 00:05:51.913 05:03:48 -- common/autotest_common.sh@1114 -- # default_locks 00:05:51.913 05:03:48 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=114225 00:05:51.913 05:03:48 -- event/cpu_locks.sh@47 -- # waitforlisten 114225 00:05:51.913 05:03:48 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.913 05:03:48 -- common/autotest_common.sh@829 -- # '[' -z 114225 ']' 00:05:51.913 05:03:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.913 05:03:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.913 05:03:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:51.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.913 05:03:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.913 05:03:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.913 [2024-11-20 05:03:48.637208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.913 [2024-11-20 05:03:48.637257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114225 ] 00:05:51.913 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.913 [2024-11-20 05:03:48.691895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.172 [2024-11-20 05:03:48.760299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.172 [2024-11-20 05:03:48.760435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.740 05:03:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.740 05:03:49 -- common/autotest_common.sh@862 -- # return 0 00:05:52.740 05:03:49 -- event/cpu_locks.sh@49 -- # locks_exist 114225 00:05:52.740 05:03:49 -- event/cpu_locks.sh@22 -- # lslocks -p 114225 00:05:52.740 05:03:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.999 lslocks: write error 00:05:52.999 05:03:49 -- event/cpu_locks.sh@50 -- # killprocess 114225 00:05:52.999 05:03:49 -- common/autotest_common.sh@936 -- # '[' -z 114225 ']' 00:05:52.999 05:03:49 -- common/autotest_common.sh@940 -- # kill -0 114225 00:05:52.999 05:03:49 -- common/autotest_common.sh@941 -- # uname 00:05:52.999 05:03:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.999 05:03:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114225 00:05:52.999 05:03:49 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:05:52.999 05:03:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.999 05:03:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114225' 00:05:52.999 killing process with pid 114225 00:05:52.999 05:03:49 -- common/autotest_common.sh@955 -- # kill 114225 00:05:52.999 05:03:49 -- common/autotest_common.sh@960 -- # wait 114225 00:05:53.259 05:03:50 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 114225 00:05:53.259 05:03:50 -- common/autotest_common.sh@650 -- # local es=0 00:05:53.259 05:03:50 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 114225 00:05:53.259 05:03:50 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:53.259 05:03:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.259 05:03:50 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:53.259 05:03:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.259 05:03:50 -- common/autotest_common.sh@653 -- # waitforlisten 114225 00:05:53.259 05:03:50 -- common/autotest_common.sh@829 -- # '[' -z 114225 ']' 00:05:53.259 05:03:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.259 05:03:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.259 05:03:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.259 05:03:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.259 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:53.259 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (114225) - No such process 00:05:53.259 ERROR: process (pid: 114225) is no longer running 00:05:53.259 05:03:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.259 05:03:50 -- common/autotest_common.sh@862 -- # return 1 00:05:53.259 05:03:50 -- common/autotest_common.sh@653 -- # es=1 00:05:53.259 05:03:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.259 05:03:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.259 05:03:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.259 05:03:50 -- event/cpu_locks.sh@54 -- # no_locks 00:05:53.259 05:03:50 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.259 05:03:50 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.259 05:03:50 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.259 00:05:53.259 real 0m1.451s 00:05:53.259 user 0m1.533s 00:05:53.259 sys 0m0.454s 00:05:53.259 05:03:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.259 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:53.259 ************************************ 00:05:53.259 END TEST default_locks 00:05:53.259 ************************************ 00:05:53.259 05:03:50 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:53.259 05:03:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.259 05:03:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.259 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:53.259 ************************************ 00:05:53.259 START TEST default_locks_via_rpc 00:05:53.259 ************************************ 00:05:53.259 05:03:50 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:53.259 05:03:50 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=114568 00:05:53.259 05:03:50 -- event/cpu_locks.sh@63 -- # waitforlisten 114568 00:05:53.259 05:03:50 -- common/autotest_common.sh@829 -- # '[' -z 114568 ']' 00:05:53.259 05:03:50 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.259 05:03:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.259 05:03:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.259 05:03:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.259 05:03:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.259 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:53.518 [2024-11-20 05:03:50.127498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:53.518 [2024-11-20 05:03:50.127543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114568 ] 00:05:53.518 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.518 [2024-11-20 05:03:50.184202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.518 [2024-11-20 05:03:50.258485] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.518 [2024-11-20 05:03:50.258604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.457 05:03:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.457 05:03:50 -- common/autotest_common.sh@862 -- # return 0 00:05:54.457 05:03:50 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:54.457 05:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.457 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:54.457 05:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.457 05:03:50 -- event/cpu_locks.sh@67 -- # no_locks 00:05:54.457 05:03:50 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.457 05:03:50 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.457 05:03:50 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.457 05:03:50 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.457 05:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.457 05:03:50 -- common/autotest_common.sh@10 -- # set +x 00:05:54.457 05:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.457 05:03:50 -- event/cpu_locks.sh@71 -- # locks_exist 114568 00:05:54.457 05:03:50 -- event/cpu_locks.sh@22 -- # lslocks -p 114568 00:05:54.457 05:03:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.717 05:03:51 -- event/cpu_locks.sh@73 -- # killprocess 114568 
00:05:54.717 05:03:51 -- common/autotest_common.sh@936 -- # '[' -z 114568 ']' 00:05:54.717 05:03:51 -- common/autotest_common.sh@940 -- # kill -0 114568 00:05:54.717 05:03:51 -- common/autotest_common.sh@941 -- # uname 00:05:54.717 05:03:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.717 05:03:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114568 00:05:54.717 05:03:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.717 05:03:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.717 05:03:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114568' 00:05:54.717 killing process with pid 114568 00:05:54.717 05:03:51 -- common/autotest_common.sh@955 -- # kill 114568 00:05:54.717 05:03:51 -- common/autotest_common.sh@960 -- # wait 114568 00:05:54.976 00:05:54.976 real 0m1.661s 00:05:54.976 user 0m1.763s 00:05:54.976 sys 0m0.533s 00:05:54.976 05:03:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.976 05:03:51 -- common/autotest_common.sh@10 -- # set +x 00:05:54.976 ************************************ 00:05:54.976 END TEST default_locks_via_rpc 00:05:54.976 ************************************ 00:05:54.976 05:03:51 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.976 05:03:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.976 05:03:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.976 05:03:51 -- common/autotest_common.sh@10 -- # set +x 00:05:54.976 ************************************ 00:05:54.976 START TEST non_locking_app_on_locked_coremask 00:05:54.976 ************************************ 00:05:54.976 05:03:51 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:54.976 05:03:51 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=114963 00:05:54.976 05:03:51 -- event/cpu_locks.sh@81 -- # waitforlisten 114963 /var/tmp/spdk.sock 
00:05:54.976 05:03:51 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.976 05:03:51 -- common/autotest_common.sh@829 -- # '[' -z 114963 ']' 00:05:54.976 05:03:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.976 05:03:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.976 05:03:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.976 05:03:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.977 05:03:51 -- common/autotest_common.sh@10 -- # set +x 00:05:55.236 [2024-11-20 05:03:51.826825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.236 [2024-11-20 05:03:51.826875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114963 ] 00:05:55.236 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.236 [2024-11-20 05:03:51.880780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.236 [2024-11-20 05:03:51.955418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.236 [2024-11-20 05:03:51.955534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.172 05:03:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.172 05:03:52 -- common/autotest_common.sh@862 -- # return 0 00:05:56.172 05:03:52 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:56.172 05:03:52 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=114969 00:05:56.172 05:03:52 
-- event/cpu_locks.sh@85 -- # waitforlisten 114969 /var/tmp/spdk2.sock 00:05:56.172 05:03:52 -- common/autotest_common.sh@829 -- # '[' -z 114969 ']' 00:05:56.172 05:03:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.172 05:03:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.172 05:03:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.172 05:03:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.172 05:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:56.172 [2024-11-20 05:03:52.664086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.172 [2024-11-20 05:03:52.664135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114969 ] 00:05:56.172 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.172 [2024-11-20 05:03:52.741089] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.172 [2024-11-20 05:03:52.741116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.172 [2024-11-20 05:03:52.886033] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.172 [2024-11-20 05:03:52.886168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.741 05:03:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.741 05:03:53 -- common/autotest_common.sh@862 -- # return 0 00:05:56.741 05:03:53 -- event/cpu_locks.sh@87 -- # locks_exist 114963 00:05:56.741 05:03:53 -- event/cpu_locks.sh@22 -- # lslocks -p 114963 00:05:56.741 05:03:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.309 lslocks: write error 00:05:57.309 05:03:54 -- event/cpu_locks.sh@89 -- # killprocess 114963 00:05:57.309 05:03:54 -- common/autotest_common.sh@936 -- # '[' -z 114963 ']' 00:05:57.309 05:03:54 -- common/autotest_common.sh@940 -- # kill -0 114963 00:05:57.309 05:03:54 -- common/autotest_common.sh@941 -- # uname 00:05:57.309 05:03:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.309 05:03:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114963 00:05:57.309 05:03:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.309 05:03:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.309 05:03:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114963' 00:05:57.309 killing process with pid 114963 00:05:57.309 05:03:54 -- common/autotest_common.sh@955 -- # kill 114963 00:05:57.309 05:03:54 -- common/autotest_common.sh@960 -- # wait 114963 00:05:58.246 05:03:54 -- event/cpu_locks.sh@90 -- # killprocess 114969 00:05:58.246 05:03:54 -- common/autotest_common.sh@936 -- # '[' -z 114969 ']' 00:05:58.246 05:03:54 -- common/autotest_common.sh@940 -- # kill -0 114969 00:05:58.246 05:03:54 -- common/autotest_common.sh@941 -- # uname 00:05:58.246 05:03:54 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.246 05:03:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114969 00:05:58.246 05:03:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.246 05:03:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.246 05:03:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114969' 00:05:58.246 killing process with pid 114969 00:05:58.246 05:03:54 -- common/autotest_common.sh@955 -- # kill 114969 00:05:58.246 05:03:54 -- common/autotest_common.sh@960 -- # wait 114969 00:05:58.505 00:05:58.505 real 0m3.367s 00:05:58.505 user 0m3.618s 00:05:58.505 sys 0m0.954s 00:05:58.505 05:03:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.505 05:03:55 -- common/autotest_common.sh@10 -- # set +x 00:05:58.505 ************************************ 00:05:58.505 END TEST non_locking_app_on_locked_coremask 00:05:58.505 ************************************ 00:05:58.505 05:03:55 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:58.505 05:03:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.505 05:03:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.505 05:03:55 -- common/autotest_common.sh@10 -- # set +x 00:05:58.505 ************************************ 00:05:58.505 START TEST locking_app_on_unlocked_coremask 00:05:58.505 ************************************ 00:05:58.505 05:03:55 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:58.505 05:03:55 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=115466 00:05:58.505 05:03:55 -- event/cpu_locks.sh@99 -- # waitforlisten 115466 /var/tmp/spdk.sock 00:05:58.505 05:03:55 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:58.505 05:03:55 -- common/autotest_common.sh@829 -- # '[' -z 115466 ']' 
00:05:58.505 05:03:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.505 05:03:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.505 05:03:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.505 05:03:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.505 05:03:55 -- common/autotest_common.sh@10 -- # set +x 00:05:58.505 [2024-11-20 05:03:55.236852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.506 [2024-11-20 05:03:55.236898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115466 ] 00:05:58.506 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.506 [2024-11-20 05:03:55.291009] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.506 [2024-11-20 05:03:55.291040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.765 [2024-11-20 05:03:55.354092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.765 [2024-11-20 05:03:55.354222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.331 05:03:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.331 05:03:56 -- common/autotest_common.sh@862 -- # return 0 00:05:59.331 05:03:56 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.331 05:03:56 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=115696 00:05:59.331 05:03:56 -- event/cpu_locks.sh@103 -- # waitforlisten 115696 /var/tmp/spdk2.sock 00:05:59.331 05:03:56 -- common/autotest_common.sh@829 -- # '[' -z 115696 ']' 00:05:59.331 05:03:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.331 05:03:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.331 05:03:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.331 05:03:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.331 05:03:56 -- common/autotest_common.sh@10 -- # set +x 00:05:59.331 [2024-11-20 05:03:56.066710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:59.331 [2024-11-20 05:03:56.066755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115696 ] 00:05:59.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.331 [2024-11-20 05:03:56.139483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.591 [2024-11-20 05:03:56.276155] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.591 [2024-11-20 05:03:56.276278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.158 05:03:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.158 05:03:56 -- common/autotest_common.sh@862 -- # return 0 00:06:00.158 05:03:56 -- event/cpu_locks.sh@105 -- # locks_exist 115696 00:06:00.158 05:03:56 -- event/cpu_locks.sh@22 -- # lslocks -p 115696 00:06:00.158 05:03:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.417 lslocks: write error 00:06:00.417 05:03:57 -- event/cpu_locks.sh@107 -- # killprocess 115466 00:06:00.417 05:03:57 -- common/autotest_common.sh@936 -- # '[' -z 115466 ']' 00:06:00.417 05:03:57 -- common/autotest_common.sh@940 -- # kill -0 115466 00:06:00.417 05:03:57 -- common/autotest_common.sh@941 -- # uname 00:06:00.417 05:03:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.417 05:03:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115466 00:06:00.417 05:03:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.417 05:03:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.417 05:03:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115466' 00:06:00.417 killing process with pid 115466 00:06:00.417 05:03:57 -- common/autotest_common.sh@955 -- # kill 115466 00:06:00.417 05:03:57 -- common/autotest_common.sh@960 -- # wait 
115466 00:06:01.355 05:03:57 -- event/cpu_locks.sh@108 -- # killprocess 115696 00:06:01.355 05:03:57 -- common/autotest_common.sh@936 -- # '[' -z 115696 ']' 00:06:01.355 05:03:57 -- common/autotest_common.sh@940 -- # kill -0 115696 00:06:01.355 05:03:57 -- common/autotest_common.sh@941 -- # uname 00:06:01.355 05:03:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.355 05:03:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115696 00:06:01.355 05:03:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.355 05:03:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.355 05:03:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115696' 00:06:01.355 killing process with pid 115696 00:06:01.355 05:03:57 -- common/autotest_common.sh@955 -- # kill 115696 00:06:01.355 05:03:57 -- common/autotest_common.sh@960 -- # wait 115696 00:06:01.614 00:06:01.614 real 0m3.082s 00:06:01.614 user 0m3.312s 00:06:01.614 sys 0m0.820s 00:06:01.614 05:03:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.614 05:03:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.614 ************************************ 00:06:01.614 END TEST locking_app_on_unlocked_coremask 00:06:01.614 ************************************ 00:06:01.614 05:03:58 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:01.614 05:03:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.614 05:03:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.614 05:03:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.614 ************************************ 00:06:01.614 START TEST locking_app_on_locked_coremask 00:06:01.614 ************************************ 00:06:01.614 05:03:58 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:01.614 05:03:58 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=116083 00:06:01.614 
05:03:58 -- event/cpu_locks.sh@116 -- # waitforlisten 116083 /var/tmp/spdk.sock 00:06:01.614 05:03:58 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.614 05:03:58 -- common/autotest_common.sh@829 -- # '[' -z 116083 ']' 00:06:01.614 05:03:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.614 05:03:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.614 05:03:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.614 05:03:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.614 05:03:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.614 [2024-11-20 05:03:58.355847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.614 [2024-11-20 05:03:58.355900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116083 ] 00:06:01.614 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.614 [2024-11-20 05:03:58.410829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.874 [2024-11-20 05:03:58.486159] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.874 [2024-11-20 05:03:58.486273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.442 05:03:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.442 05:03:59 -- common/autotest_common.sh@862 -- # return 0 00:06:02.442 05:03:59 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=116200 00:06:02.442 05:03:59 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 116200 /var/tmp/spdk2.sock 00:06:02.442 
05:03:59 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.442 05:03:59 -- common/autotest_common.sh@650 -- # local es=0 00:06:02.442 05:03:59 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 116200 /var/tmp/spdk2.sock 00:06:02.442 05:03:59 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:02.442 05:03:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.442 05:03:59 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:02.442 05:03:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.442 05:03:59 -- common/autotest_common.sh@653 -- # waitforlisten 116200 /var/tmp/spdk2.sock 00:06:02.442 05:03:59 -- common/autotest_common.sh@829 -- # '[' -z 116200 ']' 00:06:02.442 05:03:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.442 05:03:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.442 05:03:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.442 05:03:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.442 05:03:59 -- common/autotest_common.sh@10 -- # set +x 00:06:02.442 [2024-11-20 05:03:59.211441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:02.443 [2024-11-20 05:03:59.211490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116200 ] 00:06:02.443 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.702 [2024-11-20 05:03:59.285490] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 116083 has claimed it. 00:06:02.702 [2024-11-20 05:03:59.285527] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.271 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (116200) - No such process 00:06:03.271 ERROR: process (pid: 116200) is no longer running 00:06:03.271 05:03:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.271 05:03:59 -- common/autotest_common.sh@862 -- # return 1 00:06:03.271 05:03:59 -- common/autotest_common.sh@653 -- # es=1 00:06:03.271 05:03:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.271 05:03:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.271 05:03:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.271 05:03:59 -- event/cpu_locks.sh@122 -- # locks_exist 116083 00:06:03.271 05:03:59 -- event/cpu_locks.sh@22 -- # lslocks -p 116083 00:06:03.271 05:03:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.271 lslocks: write error 00:06:03.271 05:04:00 -- event/cpu_locks.sh@124 -- # killprocess 116083 00:06:03.271 05:04:00 -- common/autotest_common.sh@936 -- # '[' -z 116083 ']' 00:06:03.271 05:04:00 -- common/autotest_common.sh@940 -- # kill -0 116083 00:06:03.271 05:04:00 -- common/autotest_common.sh@941 -- # uname 00:06:03.271 05:04:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.271 05:04:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116083 00:06:03.530 05:04:00 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:03.530 05:04:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:03.530 05:04:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116083' 00:06:03.530 killing process with pid 116083 00:06:03.530 05:04:00 -- common/autotest_common.sh@955 -- # kill 116083 00:06:03.530 05:04:00 -- common/autotest_common.sh@960 -- # wait 116083 00:06:03.789 00:06:03.790 real 0m2.153s 00:06:03.790 user 0m2.388s 00:06:03.790 sys 0m0.570s 00:06:03.790 05:04:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.790 05:04:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.790 ************************************ 00:06:03.790 END TEST locking_app_on_locked_coremask 00:06:03.790 ************************************ 00:06:03.790 05:04:00 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:03.790 05:04:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.790 05:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.790 05:04:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.790 ************************************ 00:06:03.790 START TEST locking_overlapped_coremask 00:06:03.790 ************************************ 00:06:03.790 05:04:00 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:03.790 05:04:00 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=116458 00:06:03.790 05:04:00 -- event/cpu_locks.sh@133 -- # waitforlisten 116458 /var/tmp/spdk.sock 00:06:03.790 05:04:00 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:03.790 05:04:00 -- common/autotest_common.sh@829 -- # '[' -z 116458 ']' 00:06:03.790 05:04:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.790 05:04:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.790 05:04:00 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.790 05:04:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.790 05:04:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.790 [2024-11-20 05:04:00.545921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.790 [2024-11-20 05:04:00.545970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116458 ] 00:06:03.790 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.790 [2024-11-20 05:04:00.601086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.049 [2024-11-20 05:04:00.678799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.049 [2024-11-20 05:04:00.678935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.049 [2024-11-20 05:04:00.678953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.049 [2024-11-20 05:04:00.678955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.618 05:04:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.618 05:04:01 -- common/autotest_common.sh@862 -- # return 0 00:06:04.618 05:04:01 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=116688 00:06:04.618 05:04:01 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 116688 /var/tmp/spdk2.sock 00:06:04.618 05:04:01 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:04.618 05:04:01 -- common/autotest_common.sh@650 -- # local es=0 00:06:04.618 05:04:01 -- common/autotest_common.sh@652 -- # 
valid_exec_arg waitforlisten 116688 /var/tmp/spdk2.sock 00:06:04.618 05:04:01 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.618 05:04:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.618 05:04:01 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:04.618 05:04:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.618 05:04:01 -- common/autotest_common.sh@653 -- # waitforlisten 116688 /var/tmp/spdk2.sock 00:06:04.618 05:04:01 -- common/autotest_common.sh@829 -- # '[' -z 116688 ']' 00:06:04.618 05:04:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.618 05:04:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.618 05:04:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.618 05:04:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.618 05:04:01 -- common/autotest_common.sh@10 -- # set +x 00:06:04.618 [2024-11-20 05:04:01.411112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.618 [2024-11-20 05:04:01.411158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116688 ] 00:06:04.618 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.877 [2024-11-20 05:04:01.487128] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 116458 has claimed it. 00:06:04.877 [2024-11-20 05:04:01.487161] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:05.445 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (116688) - No such process 00:06:05.445 ERROR: process (pid: 116688) is no longer running 00:06:05.445 05:04:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.445 05:04:02 -- common/autotest_common.sh@862 -- # return 1 00:06:05.445 05:04:02 -- common/autotest_common.sh@653 -- # es=1 00:06:05.445 05:04:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.445 05:04:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.445 05:04:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.445 05:04:02 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:05.445 05:04:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.445 05:04:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.445 05:04:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.445 05:04:02 -- event/cpu_locks.sh@141 -- # killprocess 116458 00:06:05.445 05:04:02 -- common/autotest_common.sh@936 -- # '[' -z 116458 ']' 00:06:05.445 05:04:02 -- common/autotest_common.sh@940 -- # kill -0 116458 00:06:05.445 05:04:02 -- common/autotest_common.sh@941 -- # uname 00:06:05.445 05:04:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.445 05:04:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116458 00:06:05.445 05:04:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.445 05:04:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.445 05:04:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116458' 00:06:05.445 killing process with pid 116458 00:06:05.445 05:04:02 -- 
common/autotest_common.sh@955 -- # kill 116458 00:06:05.445 05:04:02 -- common/autotest_common.sh@960 -- # wait 116458 00:06:05.705 00:06:05.705 real 0m1.932s 00:06:05.705 user 0m5.442s 00:06:05.705 sys 0m0.399s 00:06:05.705 05:04:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.705 05:04:02 -- common/autotest_common.sh@10 -- # set +x 00:06:05.705 ************************************ 00:06:05.705 END TEST locking_overlapped_coremask 00:06:05.705 ************************************ 00:06:05.705 05:04:02 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:05.705 05:04:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.705 05:04:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.705 05:04:02 -- common/autotest_common.sh@10 -- # set +x 00:06:05.705 ************************************ 00:06:05.705 START TEST locking_overlapped_coremask_via_rpc 00:06:05.705 ************************************ 00:06:05.705 05:04:02 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:05.705 05:04:02 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=116854 00:06:05.705 05:04:02 -- event/cpu_locks.sh@149 -- # waitforlisten 116854 /var/tmp/spdk.sock 00:06:05.705 05:04:02 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:05.705 05:04:02 -- common/autotest_common.sh@829 -- # '[' -z 116854 ']' 00:06:05.705 05:04:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.705 05:04:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.705 05:04:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:05.705 05:04:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.705 05:04:02 -- common/autotest_common.sh@10 -- # set +x 00:06:05.705 [2024-11-20 05:04:02.518218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.705 [2024-11-20 05:04:02.518268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116854 ] 00:06:05.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.964 [2024-11-20 05:04:02.573201] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:05.964 [2024-11-20 05:04:02.573229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.964 [2024-11-20 05:04:02.650145] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.964 [2024-11-20 05:04:02.650280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.964 [2024-11-20 05:04:02.650396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.964 [2024-11-20 05:04:02.650397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.533 05:04:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.533 05:04:03 -- common/autotest_common.sh@862 -- # return 0 00:06:06.533 05:04:03 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=116962 00:06:06.533 05:04:03 -- event/cpu_locks.sh@153 -- # waitforlisten 116962 /var/tmp/spdk2.sock 00:06:06.533 05:04:03 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:06.533 05:04:03 -- common/autotest_common.sh@829 -- # '[' -z 116962 ']' 00:06:06.533 05:04:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.533 05:04:03 -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:06:06.533 05:04:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.533 05:04:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.533 05:04:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.792 [2024-11-20 05:04:03.379160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.792 [2024-11-20 05:04:03.379204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116962 ] 00:06:06.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.792 [2024-11-20 05:04:03.455328] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:06.792 [2024-11-20 05:04:03.455351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.792 [2024-11-20 05:04:03.597888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.792 [2024-11-20 05:04:03.598063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.792 [2024-11-20 05:04:03.598139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.792 [2024-11-20 05:04:03.598140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:07.360 05:04:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.360 05:04:04 -- common/autotest_common.sh@862 -- # return 0 00:06:07.360 05:04:04 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.360 05:04:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.360 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.619 05:04:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:07.620 05:04:04 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:07.620 05:04:04 -- common/autotest_common.sh@650 -- # local es=0 00:06:07.620 05:04:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:07.620 05:04:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:07.620 05:04:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.620 05:04:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:07.620 05:04:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.620 05:04:04 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:07.620 05:04:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.620 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.620 [2024-11-20 05:04:04.201119] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 116854 has claimed it. 
00:06:07.620 request: 00:06:07.620 { 00:06:07.620 "method": "framework_enable_cpumask_locks", 00:06:07.620 "req_id": 1 00:06:07.620 } 00:06:07.620 Got JSON-RPC error response 00:06:07.620 response: 00:06:07.620 { 00:06:07.620 "code": -32603, 00:06:07.620 "message": "Failed to claim CPU core: 2" 00:06:07.620 } 00:06:07.620 05:04:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:07.620 05:04:04 -- common/autotest_common.sh@653 -- # es=1 00:06:07.620 05:04:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.620 05:04:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.620 05:04:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.620 05:04:04 -- event/cpu_locks.sh@158 -- # waitforlisten 116854 /var/tmp/spdk.sock 00:06:07.620 05:04:04 -- common/autotest_common.sh@829 -- # '[' -z 116854 ']' 00:06:07.620 05:04:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.620 05:04:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.620 05:04:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.620 05:04:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.620 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.620 05:04:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.620 05:04:04 -- common/autotest_common.sh@862 -- # return 0 00:06:07.620 05:04:04 -- event/cpu_locks.sh@159 -- # waitforlisten 116962 /var/tmp/spdk2.sock 00:06:07.620 05:04:04 -- common/autotest_common.sh@829 -- # '[' -z 116962 ']' 00:06:07.620 05:04:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.620 05:04:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.620 05:04:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.620 05:04:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.620 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.879 05:04:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.879 05:04:04 -- common/autotest_common.sh@862 -- # return 0 00:06:07.879 05:04:04 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:07.879 05:04:04 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.879 05:04:04 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.879 05:04:04 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.879 00:06:07.879 real 0m2.123s 00:06:07.879 user 0m0.889s 00:06:07.879 sys 0m0.160s 00:06:07.879 05:04:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.879 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.879 
************************************ 00:06:07.879 END TEST locking_overlapped_coremask_via_rpc 00:06:07.879 ************************************ 00:06:07.879 05:04:04 -- event/cpu_locks.sh@174 -- # cleanup 00:06:07.879 05:04:04 -- event/cpu_locks.sh@15 -- # [[ -z 116854 ]] 00:06:07.879 05:04:04 -- event/cpu_locks.sh@15 -- # killprocess 116854 00:06:07.879 05:04:04 -- common/autotest_common.sh@936 -- # '[' -z 116854 ']' 00:06:07.879 05:04:04 -- common/autotest_common.sh@940 -- # kill -0 116854 00:06:07.879 05:04:04 -- common/autotest_common.sh@941 -- # uname 00:06:07.879 05:04:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.879 05:04:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116854 00:06:07.879 05:04:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.879 05:04:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.879 05:04:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116854' 00:06:07.879 killing process with pid 116854 00:06:07.879 05:04:04 -- common/autotest_common.sh@955 -- # kill 116854 00:06:07.879 05:04:04 -- common/autotest_common.sh@960 -- # wait 116854 00:06:08.447 05:04:05 -- event/cpu_locks.sh@16 -- # [[ -z 116962 ]] 00:06:08.447 05:04:05 -- event/cpu_locks.sh@16 -- # killprocess 116962 00:06:08.447 05:04:05 -- common/autotest_common.sh@936 -- # '[' -z 116962 ']' 00:06:08.447 05:04:05 -- common/autotest_common.sh@940 -- # kill -0 116962 00:06:08.447 05:04:05 -- common/autotest_common.sh@941 -- # uname 00:06:08.447 05:04:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.447 05:04:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116962 00:06:08.447 05:04:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:08.447 05:04:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:08.447 05:04:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116962' 
00:06:08.447 killing process with pid 116962 00:06:08.447 05:04:05 -- common/autotest_common.sh@955 -- # kill 116962 00:06:08.447 05:04:05 -- common/autotest_common.sh@960 -- # wait 116962 00:06:08.707 05:04:05 -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.707 05:04:05 -- event/cpu_locks.sh@1 -- # cleanup 00:06:08.707 05:04:05 -- event/cpu_locks.sh@15 -- # [[ -z 116854 ]] 00:06:08.707 05:04:05 -- event/cpu_locks.sh@15 -- # killprocess 116854 00:06:08.707 05:04:05 -- common/autotest_common.sh@936 -- # '[' -z 116854 ']' 00:06:08.707 05:04:05 -- common/autotest_common.sh@940 -- # kill -0 116854 00:06:08.707 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (116854) - No such process 00:06:08.707 05:04:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 116854 is not found' 00:06:08.707 Process with pid 116854 is not found 00:06:08.707 05:04:05 -- event/cpu_locks.sh@16 -- # [[ -z 116962 ]] 00:06:08.707 05:04:05 -- event/cpu_locks.sh@16 -- # killprocess 116962 00:06:08.707 05:04:05 -- common/autotest_common.sh@936 -- # '[' -z 116962 ']' 00:06:08.707 05:04:05 -- common/autotest_common.sh@940 -- # kill -0 116962 00:06:08.707 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (116962) - No such process 00:06:08.707 05:04:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 116962 is not found' 00:06:08.707 Process with pid 116962 is not found 00:06:08.707 05:04:05 -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.707 00:06:08.707 real 0m17.014s 00:06:08.707 user 0m29.798s 00:06:08.707 sys 0m4.697s 00:06:08.707 05:04:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.707 05:04:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.707 ************************************ 00:06:08.707 END TEST cpu_locks 00:06:08.707 ************************************ 00:06:08.707 00:06:08.707 real 0m42.678s 00:06:08.707 user 1m22.356s 00:06:08.707 sys 0m8.045s 
00:06:08.707 05:04:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.707 05:04:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.707 ************************************ 00:06:08.707 END TEST event 00:06:08.707 ************************************ 00:06:08.707 05:04:05 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:06:08.707 05:04:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.707 05:04:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.707 05:04:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.707 ************************************ 00:06:08.707 START TEST thread 00:06:08.707 ************************************ 00:06:08.707 05:04:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:06:08.967 * Looking for test storage... 00:06:08.967 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread 00:06:08.967 05:04:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:08.967 05:04:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:08.967 05:04:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.967 05:04:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.967 05:04:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.967 05:04:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.967 05:04:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.967 05:04:05 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.967 05:04:05 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.967 05:04:05 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.967 05:04:05 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.967 05:04:05 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.967 05:04:05 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.967 05:04:05 -- scripts/common.sh@340 -- # ver2_l=1 
00:06:08.967 05:04:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.967 05:04:05 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.967 05:04:05 -- scripts/common.sh@344 -- # : 1 00:06:08.967 05:04:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.967 05:04:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.967 05:04:05 -- scripts/common.sh@364 -- # decimal 1 00:06:08.967 05:04:05 -- scripts/common.sh@352 -- # local d=1 00:06:08.967 05:04:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.967 05:04:05 -- scripts/common.sh@354 -- # echo 1 00:06:08.967 05:04:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.967 05:04:05 -- scripts/common.sh@365 -- # decimal 2 00:06:08.967 05:04:05 -- scripts/common.sh@352 -- # local d=2 00:06:08.967 05:04:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.967 05:04:05 -- scripts/common.sh@354 -- # echo 2 00:06:08.967 05:04:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.967 05:04:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.967 05:04:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.967 05:04:05 -- scripts/common.sh@367 -- # return 0 00:06:08.967 05:04:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.967 05:04:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.967 --rc genhtml_branch_coverage=1 00:06:08.967 --rc genhtml_function_coverage=1 00:06:08.967 --rc genhtml_legend=1 00:06:08.967 --rc geninfo_all_blocks=1 00:06:08.967 --rc geninfo_unexecuted_blocks=1 00:06:08.967 00:06:08.967 ' 00:06:08.967 05:04:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.967 --rc genhtml_branch_coverage=1 00:06:08.967 --rc genhtml_function_coverage=1 00:06:08.967 --rc genhtml_legend=1 
00:06:08.967 --rc geninfo_all_blocks=1 00:06:08.967 --rc geninfo_unexecuted_blocks=1 00:06:08.967 00:06:08.967 ' 00:06:08.967 05:04:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.967 --rc genhtml_branch_coverage=1 00:06:08.967 --rc genhtml_function_coverage=1 00:06:08.967 --rc genhtml_legend=1 00:06:08.967 --rc geninfo_all_blocks=1 00:06:08.967 --rc geninfo_unexecuted_blocks=1 00:06:08.967 00:06:08.967 ' 00:06:08.967 05:04:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.967 --rc genhtml_branch_coverage=1 00:06:08.967 --rc genhtml_function_coverage=1 00:06:08.967 --rc genhtml_legend=1 00:06:08.967 --rc geninfo_all_blocks=1 00:06:08.967 --rc geninfo_unexecuted_blocks=1 00:06:08.968 00:06:08.968 ' 00:06:08.968 05:04:05 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.968 05:04:05 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:08.968 05:04:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.968 05:04:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.968 ************************************ 00:06:08.968 START TEST thread_poller_perf 00:06:08.968 ************************************ 00:06:08.968 05:04:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.968 [2024-11-20 05:04:05.690753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:08.968 [2024-11-20 05:04:05.690829] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117518 ] 00:06:08.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.968 [2024-11-20 05:04:05.751140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.227 [2024-11-20 05:04:05.821009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.227 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:10.164 [2024-11-20T04:04:06.992Z] ====================================== 00:06:10.164 [2024-11-20T04:04:06.992Z] busy:2107296226 (cyc) 00:06:10.164 [2024-11-20T04:04:06.992Z] total_run_count: 396000 00:06:10.164 [2024-11-20T04:04:06.992Z] tsc_hz: 2100000000 (cyc) 00:06:10.164 [2024-11-20T04:04:06.992Z] ====================================== 00:06:10.164 [2024-11-20T04:04:06.992Z] poller_cost: 5321 (cyc), 2533 (nsec) 00:06:10.164 00:06:10.164 real 0m1.249s 00:06:10.164 user 0m1.175s 00:06:10.164 sys 0m0.069s 00:06:10.164 05:04:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.164 05:04:06 -- common/autotest_common.sh@10 -- # set +x 00:06:10.164 ************************************ 00:06:10.164 END TEST thread_poller_perf 00:06:10.164 ************************************ 00:06:10.164 05:04:06 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.164 05:04:06 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:10.164 05:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.164 05:04:06 -- common/autotest_common.sh@10 -- # set +x 00:06:10.164 ************************************ 00:06:10.164 START TEST thread_poller_perf 00:06:10.164 ************************************ 00:06:10.164 05:04:06 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.164 [2024-11-20 05:04:06.979844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.164 [2024-11-20 05:04:06.979920] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117768 ] 00:06:10.423 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.423 [2024-11-20 05:04:07.041019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.423 [2024-11-20 05:04:07.104981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.423 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:11.804 [2024-11-20T04:04:08.632Z] ====================================== 00:06:11.804 [2024-11-20T04:04:08.632Z] busy:2101726216 (cyc) 00:06:11.804 [2024-11-20T04:04:08.632Z] total_run_count: 5517000 00:06:11.804 [2024-11-20T04:04:08.632Z] tsc_hz: 2100000000 (cyc) 00:06:11.804 [2024-11-20T04:04:08.632Z] ====================================== 00:06:11.804 [2024-11-20T04:04:08.632Z] poller_cost: 380 (cyc), 180 (nsec) 00:06:11.804 00:06:11.804 real 0m1.239s 00:06:11.804 user 0m1.168s 00:06:11.804 sys 0m0.067s 00:06:11.804 05:04:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.804 05:04:08 -- common/autotest_common.sh@10 -- # set +x 00:06:11.804 ************************************ 00:06:11.804 END TEST thread_poller_perf 00:06:11.804 ************************************ 00:06:11.804 05:04:08 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:11.804 00:06:11.804 real 0m2.734s 00:06:11.804 user 0m2.478s 00:06:11.804 sys 0m0.272s 00:06:11.804 05:04:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.804 05:04:08 -- common/autotest_common.sh@10 -- # set +x 00:06:11.804 
************************************ 00:06:11.804 END TEST thread 00:06:11.804 ************************************ 00:06:11.804 05:04:08 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel.sh 00:06:11.804 05:04:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.804 05:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.804 05:04:08 -- common/autotest_common.sh@10 -- # set +x 00:06:11.804 ************************************ 00:06:11.804 START TEST accel 00:06:11.804 ************************************ 00:06:11.804 05:04:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel.sh 00:06:11.804 * Looking for test storage... 00:06:11.804 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel 00:06:11.804 05:04:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:11.804 05:04:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:11.804 05:04:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:11.804 05:04:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:11.804 05:04:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:11.804 05:04:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:11.804 05:04:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:11.804 05:04:08 -- scripts/common.sh@335 -- # IFS=.-: 00:06:11.804 05:04:08 -- scripts/common.sh@335 -- # read -ra ver1 00:06:11.804 05:04:08 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.804 05:04:08 -- scripts/common.sh@336 -- # read -ra ver2 00:06:11.804 05:04:08 -- scripts/common.sh@337 -- # local 'op=<' 00:06:11.804 05:04:08 -- scripts/common.sh@339 -- # ver1_l=2 00:06:11.804 05:04:08 -- scripts/common.sh@340 -- # ver2_l=1 00:06:11.804 05:04:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:11.804 05:04:08 -- scripts/common.sh@343 -- # case "$op" in 00:06:11.804 05:04:08 -- 
scripts/common.sh@344 -- # : 1 00:06:11.804 05:04:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:11.804 05:04:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.804 05:04:08 -- scripts/common.sh@364 -- # decimal 1 00:06:11.804 05:04:08 -- scripts/common.sh@352 -- # local d=1 00:06:11.804 05:04:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.804 05:04:08 -- scripts/common.sh@354 -- # echo 1 00:06:11.804 05:04:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:11.804 05:04:08 -- scripts/common.sh@365 -- # decimal 2 00:06:11.804 05:04:08 -- scripts/common.sh@352 -- # local d=2 00:06:11.804 05:04:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.804 05:04:08 -- scripts/common.sh@354 -- # echo 2 00:06:11.804 05:04:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:11.804 05:04:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:11.804 05:04:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:11.804 05:04:08 -- scripts/common.sh@367 -- # return 0 00:06:11.805 05:04:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.805 05:04:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:11.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.805 --rc genhtml_branch_coverage=1 00:06:11.805 --rc genhtml_function_coverage=1 00:06:11.805 --rc genhtml_legend=1 00:06:11.805 --rc geninfo_all_blocks=1 00:06:11.805 --rc geninfo_unexecuted_blocks=1 00:06:11.805 00:06:11.805 ' 00:06:11.805 05:04:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:11.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.805 --rc genhtml_branch_coverage=1 00:06:11.805 --rc genhtml_function_coverage=1 00:06:11.805 --rc genhtml_legend=1 00:06:11.805 --rc geninfo_all_blocks=1 00:06:11.805 --rc geninfo_unexecuted_blocks=1 00:06:11.805 00:06:11.805 ' 00:06:11.805 05:04:08 -- common/autotest_common.sh@1704 
-- # export 'LCOV=lcov 00:06:11.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.805 --rc genhtml_branch_coverage=1 00:06:11.805 --rc genhtml_function_coverage=1 00:06:11.805 --rc genhtml_legend=1 00:06:11.805 --rc geninfo_all_blocks=1 00:06:11.805 --rc geninfo_unexecuted_blocks=1 00:06:11.805 00:06:11.805 ' 00:06:11.805 05:04:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:11.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.805 --rc genhtml_branch_coverage=1 00:06:11.805 --rc genhtml_function_coverage=1 00:06:11.805 --rc genhtml_legend=1 00:06:11.805 --rc geninfo_all_blocks=1 00:06:11.805 --rc geninfo_unexecuted_blocks=1 00:06:11.805 00:06:11.805 ' 00:06:11.805 05:04:08 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:11.805 05:04:08 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:11.805 05:04:08 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.805 05:04:08 -- accel/accel.sh@59 -- # spdk_tgt_pid=118065 00:06:11.805 05:04:08 -- accel/accel.sh@60 -- # waitforlisten 118065 00:06:11.805 05:04:08 -- common/autotest_common.sh@829 -- # '[' -z 118065 ']' 00:06:11.805 05:04:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.805 05:04:08 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:11.805 05:04:08 -- accel/accel.sh@58 -- # build_accel_config 00:06:11.805 05:04:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.805 05:04:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.805 05:04:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.805 05:04:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.805 05:04:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.805 05:04:08 -- common/autotest_common.sh@10 -- # set +x 00:06:11.805 05:04:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.805 05:04:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.805 05:04:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.805 05:04:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.805 05:04:08 -- accel/accel.sh@42 -- # jq -r . 00:06:11.805 [2024-11-20 05:04:08.473572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.805 [2024-11-20 05:04:08.473618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118065 ] 00:06:11.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.805 [2024-11-20 05:04:08.528710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.805 [2024-11-20 05:04:08.596673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.805 [2024-11-20 05:04:08.596806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.743 05:04:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.743 05:04:09 -- common/autotest_common.sh@862 -- # return 0 00:06:12.743 05:04:09 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:12.743 05:04:09 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:12.743 05:04:09 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:12.743 05:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.743 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:12.743 05:04:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for 
opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 
00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # IFS== 00:06:12.743 05:04:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.743 05:04:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.743 05:04:09 -- accel/accel.sh@67 -- # killprocess 118065 00:06:12.743 05:04:09 -- common/autotest_common.sh@936 -- # '[' -z 118065 ']' 00:06:12.743 05:04:09 -- common/autotest_common.sh@940 -- # kill -0 118065 00:06:12.743 05:04:09 -- common/autotest_common.sh@941 -- # uname 00:06:12.743 05:04:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.743 05:04:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118065 00:06:12.743 05:04:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.743 05:04:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.743 05:04:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118065' 00:06:12.743 killing process with pid 118065 00:06:12.743 05:04:09 -- common/autotest_common.sh@955 -- # kill 118065 00:06:12.743 05:04:09 -- common/autotest_common.sh@960 -- # wait 118065 00:06:13.003 05:04:09 -- accel/accel.sh@68 -- # trap - ERR 00:06:13.003 05:04:09 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:13.003 05:04:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:13.003 05:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.003 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:13.003 05:04:09 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:13.003 05:04:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:13.003 05:04:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.003 05:04:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.003 05:04:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.003 05:04:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.003 05:04:09 -- accel/accel.sh@35 -- # [[ 
0 -gt 0 ]] 00:06:13.003 05:04:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.003 05:04:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.003 05:04:09 -- accel/accel.sh@42 -- # jq -r . 00:06:13.003 05:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.003 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:13.003 05:04:09 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:13.003 05:04:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:13.003 05:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.003 05:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:13.003 ************************************ 00:06:13.003 START TEST accel_missing_filename 00:06:13.003 ************************************ 00:06:13.003 05:04:09 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:13.003 05:04:09 -- common/autotest_common.sh@650 -- # local es=0 00:06:13.003 05:04:09 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:13.003 05:04:09 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:13.003 05:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.003 05:04:09 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:13.004 05:04:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.004 05:04:09 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:13.004 05:04:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:13.004 05:04:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.004 05:04:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.004 05:04:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.004 05:04:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.004 05:04:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.004 05:04:09 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.004 05:04:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.004 05:04:09 -- accel/accel.sh@42 -- # jq -r . 00:06:13.004 [2024-11-20 05:04:09.820290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.004 [2024-11-20 05:04:09.820355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118328 ] 00:06:13.263 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.263 [2024-11-20 05:04:09.876876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.263 [2024-11-20 05:04:09.944250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.264 [2024-11-20 05:04:09.984808] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.264 [2024-11-20 05:04:10.045166] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:13.523 A filename is required. 
00:06:13.523 05:04:10 -- common/autotest_common.sh@653 -- # es=234 00:06:13.523 05:04:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.523 05:04:10 -- common/autotest_common.sh@662 -- # es=106 00:06:13.523 05:04:10 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:13.523 05:04:10 -- common/autotest_common.sh@670 -- # es=1 00:06:13.523 05:04:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.523 00:06:13.523 real 0m0.349s 00:06:13.523 user 0m0.268s 00:06:13.523 sys 0m0.118s 00:06:13.523 05:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.523 05:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:13.523 ************************************ 00:06:13.523 END TEST accel_missing_filename 00:06:13.523 ************************************ 00:06:13.523 05:04:10 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:13.523 05:04:10 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:13.523 05:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.523 05:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:13.523 ************************************ 00:06:13.523 START TEST accel_compress_verify 00:06:13.523 ************************************ 00:06:13.523 05:04:10 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:13.523 05:04:10 -- common/autotest_common.sh@650 -- # local es=0 00:06:13.523 05:04:10 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:13.523 05:04:10 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:13.523 05:04:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.523 05:04:10 -- common/autotest_common.sh@642 -- # type -t 
accel_perf 00:06:13.523 05:04:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.523 05:04:10 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:13.523 05:04:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:13.523 05:04:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.523 05:04:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.523 05:04:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.523 05:04:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.523 05:04:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.523 05:04:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.523 05:04:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.523 05:04:10 -- accel/accel.sh@42 -- # jq -r . 00:06:13.523 [2024-11-20 05:04:10.205926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.523 [2024-11-20 05:04:10.205986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118360 ] 00:06:13.523 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.523 [2024-11-20 05:04:10.264693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.523 [2024-11-20 05:04:10.332552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.783 [2024-11-20 05:04:10.372496] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.783 [2024-11-20 05:04:10.432818] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:13.783 00:06:13.783 Compression does not support the verify option, aborting. 
00:06:13.783 05:04:10 -- common/autotest_common.sh@653 -- # es=161 00:06:13.783 05:04:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.783 05:04:10 -- common/autotest_common.sh@662 -- # es=33 00:06:13.783 05:04:10 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:13.783 05:04:10 -- common/autotest_common.sh@670 -- # es=1 00:06:13.783 05:04:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.783 00:06:13.783 real 0m0.348s 00:06:13.783 user 0m0.270s 00:06:13.783 sys 0m0.114s 00:06:13.783 05:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.783 05:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:13.783 ************************************ 00:06:13.783 END TEST accel_compress_verify 00:06:13.783 ************************************ 00:06:13.783 05:04:10 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:13.783 05:04:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:13.783 05:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.783 05:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:13.783 ************************************ 00:06:13.783 START TEST accel_wrong_workload 00:06:13.783 ************************************ 00:06:13.783 05:04:10 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:13.783 05:04:10 -- common/autotest_common.sh@650 -- # local es=0 00:06:13.783 05:04:10 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:13.783 05:04:10 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:13.783 05:04:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.783 05:04:10 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:13.783 05:04:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.783 05:04:10 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:13.783 05:04:10 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:13.783 05:04:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.783 05:04:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.783 05:04:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.783 05:04:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.783 05:04:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.783 05:04:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.783 05:04:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.783 05:04:10 -- accel/accel.sh@42 -- # jq -r . 00:06:13.783 Unsupported workload type: foobar 00:06:13.783 [2024-11-20 05:04:10.593167] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:13.783 accel_perf options: 00:06:13.783 [-h help message] 00:06:13.783 [-q queue depth per core] 00:06:13.783 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:13.783 [-T number of threads per core 00:06:13.783 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:13.783 [-t time in seconds] 00:06:13.783 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:13.783 [ dif_verify, , dif_generate, dif_generate_copy 00:06:13.783 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:13.783 [-l for compress/decompress workloads, name of uncompressed input file 00:06:13.783 [-S for crc32c workload, use this seed value (default 0) 00:06:13.783 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:13.783 [-f for fill workload, use this BYTE value (default 255) 00:06:13.783 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:13.783 [-y verify result if this switch is on] 00:06:13.783 [-a tasks to allocate per core (default: same value as -q)] 00:06:13.783 Can be used to spread operations across a wider range of memory. 00:06:13.783 05:04:10 -- common/autotest_common.sh@653 -- # es=1 00:06:13.783 05:04:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.783 05:04:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.783 05:04:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.783 00:06:13.783 real 0m0.034s 00:06:13.783 user 0m0.019s 00:06:13.783 sys 0m0.014s 00:06:13.783 05:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.783 05:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:13.783 ************************************ 00:06:13.783 END TEST accel_wrong_workload 00:06:13.783 ************************************ 00:06:14.043 Error: writing output failed: Broken pipe 00:06:14.043 05:04:10 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.043 05:04:10 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:14.043 05:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:06:14.043 05:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:14.043 ************************************ 00:06:14.043 START TEST accel_negative_buffers 00:06:14.043 ************************************ 00:06:14.043 05:04:10 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.043 05:04:10 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.043 05:04:10 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:14.043 05:04:10 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:14.043 05:04:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.043 05:04:10 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:14.043 05:04:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.043 05:04:10 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:14.043 05:04:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:14.043 05:04:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.043 05:04:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.043 05:04:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.043 05:04:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.043 05:04:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.043 05:04:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.043 05:04:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.043 05:04:10 -- accel/accel.sh@42 -- # jq -r . 00:06:14.044 -x option must be non-negative. 
00:06:14.044 [2024-11-20 05:04:10.659708] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:14.044 accel_perf options: 00:06:14.044 [-h help message] 00:06:14.044 [-q queue depth per core] 00:06:14.044 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:14.044 [-T number of threads per core 00:06:14.044 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:14.044 [-t time in seconds] 00:06:14.044 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:14.044 [ dif_verify, , dif_generate, dif_generate_copy 00:06:14.044 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:14.044 [-l for compress/decompress workloads, name of uncompressed input file 00:06:14.044 [-S for crc32c workload, use this seed value (default 0) 00:06:14.044 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:14.044 [-f for fill workload, use this BYTE value (default 255) 00:06:14.044 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:14.044 [-y verify result if this switch is on] 00:06:14.044 [-a tasks to allocate per core (default: same value as -q)] 00:06:14.044 Can be used to spread operations across a wider range of memory. 
00:06:14.044 05:04:10 -- common/autotest_common.sh@653 -- # es=1 00:06:14.044 05:04:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.044 05:04:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.044 05:04:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.044 00:06:14.044 real 0m0.029s 00:06:14.044 user 0m0.019s 00:06:14.044 sys 0m0.010s 00:06:14.044 05:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.044 05:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:14.044 ************************************ 00:06:14.044 END TEST accel_negative_buffers 00:06:14.044 ************************************ 00:06:14.044 Error: writing output failed: Broken pipe 00:06:14.044 05:04:10 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:14.044 05:04:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:14.044 05:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.044 05:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:14.044 ************************************ 00:06:14.044 START TEST accel_crc32c 00:06:14.044 ************************************ 00:06:14.044 05:04:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:14.044 05:04:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.044 05:04:10 -- accel/accel.sh@17 -- # local accel_module 00:06:14.044 05:04:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:14.044 05:04:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:14.044 05:04:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.044 05:04:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.044 05:04:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.044 05:04:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.044 05:04:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.044 05:04:10 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.044 05:04:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.044 05:04:10 -- accel/accel.sh@42 -- # jq -r . 00:06:14.044 [2024-11-20 05:04:10.711503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.044 [2024-11-20 05:04:10.711541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118566 ] 00:06:14.044 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.044 [2024-11-20 05:04:10.765962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.044 [2024-11-20 05:04:10.835595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.423 05:04:12 -- accel/accel.sh@18 -- # out=' 00:06:15.423 SPDK Configuration: 00:06:15.423 Core mask: 0x1 00:06:15.423 00:06:15.423 Accel Perf Configuration: 00:06:15.423 Workload Type: crc32c 00:06:15.423 CRC-32C seed: 32 00:06:15.423 Transfer size: 4096 bytes 00:06:15.423 Vector count 1 00:06:15.423 Module: software 00:06:15.423 Queue depth: 32 00:06:15.423 Allocate depth: 32 00:06:15.423 # threads/core: 1 00:06:15.423 Run time: 1 seconds 00:06:15.423 Verify: Yes 00:06:15.423 00:06:15.423 Running for 1 seconds... 
00:06:15.423 00:06:15.423 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.423 ------------------------------------------------------------------------------------ 00:06:15.423 0,0 590624/s 2307 MiB/s 0 0 00:06:15.423 ==================================================================================== 00:06:15.423 Total 590624/s 2307 MiB/s 0 0' 00:06:15.423 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.423 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.423 05:04:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:15.423 05:04:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:15.423 05:04:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.423 05:04:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.423 05:04:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.423 05:04:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.423 05:04:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.423 05:04:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.423 05:04:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.423 05:04:12 -- accel/accel.sh@42 -- # jq -r . 00:06:15.423 [2024-11-20 05:04:12.057120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:15.424 [2024-11-20 05:04:12.057180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118770 ] 00:06:15.424 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.424 [2024-11-20 05:04:12.112481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.424 [2024-11-20 05:04:12.184124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val= 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val= 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val=0x1 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val= 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val= 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val=crc32c 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- 
accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val=32 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val= 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val=software 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val=32 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val=32 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val=1 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 
-- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val=Yes 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val= 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.424 05:04:12 -- accel/accel.sh@21 -- # val= 00:06:15.424 05:04:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.424 05:04:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.805 05:04:13 -- accel/accel.sh@21 -- # val= 00:06:16.805 05:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # IFS=: 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # read -r var val 00:06:16.805 05:04:13 -- accel/accel.sh@21 -- # val= 00:06:16.805 05:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # IFS=: 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # read -r var val 00:06:16.805 05:04:13 -- accel/accel.sh@21 -- # val= 00:06:16.805 05:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # IFS=: 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # read -r var val 00:06:16.805 05:04:13 -- accel/accel.sh@21 -- # val= 00:06:16.805 05:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # IFS=: 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # read -r var val 00:06:16.805 05:04:13 -- accel/accel.sh@21 -- # val= 00:06:16.805 05:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # IFS=: 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # read -r var val 00:06:16.805 05:04:13 -- accel/accel.sh@21 -- # val= 00:06:16.805 05:04:13 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # IFS=: 00:06:16.805 05:04:13 -- accel/accel.sh@20 -- # read -r var val 00:06:16.805 05:04:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.805 05:04:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:16.805 05:04:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.805 00:06:16.805 real 0m2.691s 00:06:16.805 user 0m2.483s 00:06:16.805 sys 0m0.217s 00:06:16.805 05:04:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.805 05:04:13 -- common/autotest_common.sh@10 -- # set +x 00:06:16.805 ************************************ 00:06:16.805 END TEST accel_crc32c 00:06:16.805 ************************************ 00:06:16.805 05:04:13 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:16.805 05:04:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:16.805 05:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.805 05:04:13 -- common/autotest_common.sh@10 -- # set +x 00:06:16.805 ************************************ 00:06:16.805 START TEST accel_crc32c_C2 00:06:16.805 ************************************ 00:06:16.805 05:04:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:16.805 05:04:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.805 05:04:13 -- accel/accel.sh@17 -- # local accel_module 00:06:16.805 05:04:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:16.805 05:04:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:16.805 05:04:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.805 05:04:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.805 05:04:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.806 05:04:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.806 05:04:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.806 05:04:13 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.806 05:04:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.806 05:04:13 -- accel/accel.sh@42 -- # jq -r . 00:06:16.806 [2024-11-20 05:04:13.451665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.806 [2024-11-20 05:04:13.451743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119036 ] 00:06:16.806 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.806 [2024-11-20 05:04:13.508072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.806 [2024-11-20 05:04:13.575927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.187 05:04:14 -- accel/accel.sh@18 -- # out=' 00:06:18.187 SPDK Configuration: 00:06:18.187 Core mask: 0x1 00:06:18.187 00:06:18.187 Accel Perf Configuration: 00:06:18.187 Workload Type: crc32c 00:06:18.187 CRC-32C seed: 0 00:06:18.187 Transfer size: 4096 bytes 00:06:18.187 Vector count 2 00:06:18.187 Module: software 00:06:18.187 Queue depth: 32 00:06:18.187 Allocate depth: 32 00:06:18.187 # threads/core: 1 00:06:18.187 Run time: 1 seconds 00:06:18.187 Verify: Yes 00:06:18.187 00:06:18.187 Running for 1 seconds... 
00:06:18.187 00:06:18.187 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.187 ------------------------------------------------------------------------------------ 00:06:18.187 0,0 462848/s 1808 MiB/s 0 0 00:06:18.187 ==================================================================================== 00:06:18.187 Total 462848/s 1808 MiB/s 0 0' 00:06:18.187 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.187 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.187 05:04:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:18.187 05:04:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:18.187 05:04:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.187 05:04:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.187 05:04:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.187 05:04:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.187 05:04:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.187 05:04:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.187 05:04:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.188 05:04:14 -- accel/accel.sh@42 -- # jq -r . 00:06:18.188 [2024-11-20 05:04:14.798686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
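The Bandwidth column in the results tables above follows directly from the Transfers column: MiB/s is transfers per second times the 4096-byte transfer size, divided by 2^20. A small sketch reproducing the reported figures; the helper name is ours, and the truncate-to-whole-MiB behavior is inferred from the numbers in this log:

```python
TRANSFER_SIZE = 4096  # bytes, per the "Transfer size: 4096 bytes" config lines

def mib_per_sec(transfers_per_sec, size=TRANSFER_SIZE):
    """Bandwidth as accel_perf appears to print it: truncated to whole MiB/s."""
    return int(transfers_per_sec * size / 2**20)

print(mib_per_sec(590624))  # crc32c -S 32 run above: 2307 MiB/s
print(mib_per_sec(462848))  # crc32c -C 2 run above: 1808 MiB/s
```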
00:06:18.188 [2024-11-20 05:04:14.798746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119258 ] 00:06:18.188 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.188 [2024-11-20 05:04:14.855072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.188 [2024-11-20 05:04:14.922293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val= 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val= 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val=0x1 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val= 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val= 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val=crc32c 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- 
accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val=0 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val= 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val=software 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val=32 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val=32 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val=1 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- 
# read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val=Yes 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val= 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.188 05:04:14 -- accel/accel.sh@21 -- # val= 00:06:18.188 05:04:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.188 05:04:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.577 05:04:16 -- accel/accel.sh@21 -- # val= 00:06:19.577 05:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.577 05:04:16 -- accel/accel.sh@21 -- # val= 00:06:19.577 05:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.577 05:04:16 -- accel/accel.sh@21 -- # val= 00:06:19.577 05:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.577 05:04:16 -- accel/accel.sh@21 -- # val= 00:06:19.577 05:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.577 05:04:16 -- accel/accel.sh@21 -- # val= 00:06:19.577 05:04:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.577 05:04:16 -- accel/accel.sh@21 -- # val= 00:06:19.577 05:04:16 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.577 05:04:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.577 05:04:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.577 05:04:16 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:19.577 05:04:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.577 00:06:19.577 real 0m2.699s 00:06:19.577 user 0m2.482s 00:06:19.577 sys 0m0.227s 00:06:19.577 05:04:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.577 05:04:16 -- common/autotest_common.sh@10 -- # set +x 00:06:19.577 ************************************ 00:06:19.577 END TEST accel_crc32c_C2 00:06:19.577 ************************************ 00:06:19.577 05:04:16 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:19.577 05:04:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.577 05:04:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.577 05:04:16 -- common/autotest_common.sh@10 -- # set +x 00:06:19.577 ************************************ 00:06:19.577 START TEST accel_copy 00:06:19.577 ************************************ 00:06:19.577 05:04:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:19.577 05:04:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.577 05:04:16 -- accel/accel.sh@17 -- # local accel_module 00:06:19.577 05:04:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:19.577 05:04:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:19.577 05:04:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.577 05:04:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.577 05:04:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.577 05:04:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.577 05:04:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.577 05:04:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:19.577 05:04:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.577 05:04:16 -- accel/accel.sh@42 -- # jq -r . 00:06:19.577 [2024-11-20 05:04:16.191494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.577 [2024-11-20 05:04:16.191577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119516 ] 00:06:19.577 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.577 [2024-11-20 05:04:16.248947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.577 [2024-11-20 05:04:16.317200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.958 05:04:17 -- accel/accel.sh@18 -- # out=' 00:06:20.958 SPDK Configuration: 00:06:20.958 Core mask: 0x1 00:06:20.958 00:06:20.958 Accel Perf Configuration: 00:06:20.958 Workload Type: copy 00:06:20.958 Transfer size: 4096 bytes 00:06:20.958 Vector count 1 00:06:20.958 Module: software 00:06:20.958 Queue depth: 32 00:06:20.958 Allocate depth: 32 00:06:20.958 # threads/core: 1 00:06:20.958 Run time: 1 seconds 00:06:20.958 Verify: Yes 00:06:20.958 00:06:20.958 Running for 1 seconds... 
00:06:20.958 00:06:20.958 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.958 ------------------------------------------------------------------------------------ 00:06:20.958 0,0 438496/s 1712 MiB/s 0 0 00:06:20.958 ==================================================================================== 00:06:20.958 Total 438496/s 1712 MiB/s 0 0' 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:20.958 05:04:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:20.958 05:04:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.958 05:04:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.958 05:04:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.958 05:04:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.958 05:04:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.958 05:04:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.958 05:04:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.958 05:04:17 -- accel/accel.sh@42 -- # jq -r . 00:06:20.958 [2024-11-20 05:04:17.538825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:20.958 [2024-11-20 05:04:17.538885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119735 ] 00:06:20.958 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.958 [2024-11-20 05:04:17.594529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.958 [2024-11-20 05:04:17.664527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val= 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val= 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val=0x1 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val= 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val= 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val=copy 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- 
accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val= 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val=software 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val=32 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val=32 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val=1 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val=Yes 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 
-- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val= 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.958 05:04:17 -- accel/accel.sh@21 -- # val= 00:06:20.958 05:04:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.958 05:04:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.339 05:04:18 -- accel/accel.sh@21 -- # val= 00:06:22.339 05:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.339 05:04:18 -- accel/accel.sh@20 -- # IFS=: 00:06:22.339 05:04:18 -- accel/accel.sh@20 -- # read -r var val 00:06:22.339 05:04:18 -- accel/accel.sh@21 -- # val= 00:06:22.339 05:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.339 05:04:18 -- accel/accel.sh@20 -- # IFS=: 00:06:22.339 05:04:18 -- accel/accel.sh@20 -- # read -r var val 00:06:22.339 05:04:18 -- accel/accel.sh@21 -- # val= 00:06:22.339 05:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.339 05:04:18 -- accel/accel.sh@20 -- # IFS=: 00:06:22.339 05:04:18 -- accel/accel.sh@20 -- # read -r var val 00:06:22.339 05:04:18 -- accel/accel.sh@21 -- # val= 00:06:22.339 05:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.339 05:04:18 -- accel/accel.sh@20 -- # IFS=: 00:06:22.339 05:04:18 -- accel/accel.sh@20 -- # read -r var val 00:06:22.339 05:04:18 -- accel/accel.sh@21 -- # val= 00:06:22.339 05:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.340 05:04:18 -- accel/accel.sh@20 -- # IFS=: 00:06:22.340 05:04:18 -- accel/accel.sh@20 -- # read -r var val 00:06:22.340 05:04:18 -- accel/accel.sh@21 -- # val= 00:06:22.340 05:04:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.340 05:04:18 -- accel/accel.sh@20 -- # IFS=: 00:06:22.340 05:04:18 -- accel/accel.sh@20 -- # read -r var val 00:06:22.340 05:04:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.340 05:04:18 -- 
accel/accel.sh@28 -- # [[ -n copy ]] 00:06:22.340 05:04:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.340 00:06:22.340 real 0m2.706s 00:06:22.340 user 0m2.479s 00:06:22.340 sys 0m0.235s 00:06:22.340 05:04:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.340 05:04:18 -- common/autotest_common.sh@10 -- # set +x 00:06:22.340 ************************************ 00:06:22.340 END TEST accel_copy 00:06:22.340 ************************************ 00:06:22.340 05:04:18 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.340 05:04:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:22.340 05:04:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.340 05:04:18 -- common/autotest_common.sh@10 -- # set +x 00:06:22.340 ************************************ 00:06:22.340 START TEST accel_fill 00:06:22.340 ************************************ 00:06:22.340 05:04:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.340 05:04:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.340 05:04:18 -- accel/accel.sh@17 -- # local accel_module 00:06:22.340 05:04:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.340 05:04:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.340 05:04:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.340 05:04:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.340 05:04:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.340 05:04:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.340 05:04:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.340 05:04:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.340 05:04:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.340 05:04:18 -- accel/accel.sh@42 -- # jq -r . 
00:06:22.340 [2024-11-20 05:04:18.933689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.340 [2024-11-20 05:04:18.933764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120025 ] 00:06:22.340 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.340 [2024-11-20 05:04:18.992166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.340 [2024-11-20 05:04:19.061929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.719 05:04:20 -- accel/accel.sh@18 -- # out=' 00:06:23.719 SPDK Configuration: 00:06:23.719 Core mask: 0x1 00:06:23.719 00:06:23.719 Accel Perf Configuration: 00:06:23.719 Workload Type: fill 00:06:23.719 Fill pattern: 0x80 00:06:23.719 Transfer size: 4096 bytes 00:06:23.719 Vector count 1 00:06:23.719 Module: software 00:06:23.719 Queue depth: 64 00:06:23.719 Allocate depth: 64 00:06:23.719 # threads/core: 1 00:06:23.719 Run time: 1 seconds 00:06:23.719 Verify: Yes 00:06:23.719 00:06:23.719 Running for 1 seconds... 
00:06:23.719 00:06:23.719 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.719 ------------------------------------------------------------------------------------ 00:06:23.719 0,0 675392/s 2638 MiB/s 0 0 00:06:23.719 ==================================================================================== 00:06:23.719 Total 675392/s 2638 MiB/s 0 0' 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.719 05:04:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.719 05:04:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.719 05:04:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.719 05:04:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.719 05:04:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.719 05:04:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.719 05:04:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.719 05:04:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.719 05:04:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.719 05:04:20 -- accel/accel.sh@42 -- # jq -r . 00:06:23.719 [2024-11-20 05:04:20.277111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
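For the fill workload above (Fill pattern: 0x80, Verify: Yes), verification amounts to checking that every byte of the destination buffer equals the fill byte. A sketch with our own names, not SPDK's API:

```python
FILL_PATTERN = 0x80   # matches "Fill pattern: 0x80" in the config above
TRANSFER_SIZE = 4096  # matches "Transfer size: 4096 bytes"

# The accel engine performs this as a single memset-like operation;
# the explicit loop here is only for illustration.
dst = bytearray(TRANSFER_SIZE)
for i in range(len(dst)):
    dst[i] = FILL_PATTERN

# "Verify: Yes" -- every byte must hold the fill pattern.
assert dst == bytes([FILL_PATTERN]) * TRANSFER_SIZE
```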
00:06:23.719 [2024-11-20 05:04:20.277162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120262 ] 00:06:23.719 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.719 [2024-11-20 05:04:20.332609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.719 [2024-11-20 05:04:20.403449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.719 05:04:20 -- accel/accel.sh@21 -- # val= 00:06:23.719 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.719 05:04:20 -- accel/accel.sh@21 -- # val= 00:06:23.719 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.719 05:04:20 -- accel/accel.sh@21 -- # val=0x1 00:06:23.719 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.719 05:04:20 -- accel/accel.sh@21 -- # val= 00:06:23.719 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.719 05:04:20 -- accel/accel.sh@21 -- # val= 00:06:23.719 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.719 05:04:20 -- accel/accel.sh@21 -- # val=fill 00:06:23.719 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.719 05:04:20 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.719 05:04:20 -- 
accel/accel.sh@20 -- # read -r var val 00:06:23.719 05:04:20 -- accel/accel.sh@21 -- # val=0x80 00:06:23.719 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.719 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val= 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val=software 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val=64 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val=64 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val=1 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 
-- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val=Yes 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val= 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:23.720 05:04:20 -- accel/accel.sh@21 -- # val= 00:06:23.720 05:04:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # IFS=: 00:06:23.720 05:04:20 -- accel/accel.sh@20 -- # read -r var val 00:06:25.101 05:04:21 -- accel/accel.sh@21 -- # val= 00:06:25.101 05:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # IFS=: 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # read -r var val 00:06:25.101 05:04:21 -- accel/accel.sh@21 -- # val= 00:06:25.101 05:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # IFS=: 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # read -r var val 00:06:25.101 05:04:21 -- accel/accel.sh@21 -- # val= 00:06:25.101 05:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # IFS=: 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # read -r var val 00:06:25.101 05:04:21 -- accel/accel.sh@21 -- # val= 00:06:25.101 05:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # IFS=: 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # read -r var val 00:06:25.101 05:04:21 -- accel/accel.sh@21 -- # val= 00:06:25.101 05:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # IFS=: 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # read -r var val 00:06:25.101 05:04:21 -- accel/accel.sh@21 -- # val= 00:06:25.101 05:04:21 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # IFS=: 00:06:25.101 05:04:21 -- accel/accel.sh@20 -- # read -r var val 00:06:25.101 05:04:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.101 05:04:21 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:25.101 05:04:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.101 00:06:25.101 real 0m2.698s 00:06:25.101 user 0m2.476s 00:06:25.101 sys 0m0.223s 00:06:25.101 05:04:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.101 05:04:21 -- common/autotest_common.sh@10 -- # set +x 00:06:25.101 ************************************ 00:06:25.101 END TEST accel_fill 00:06:25.101 ************************************ 00:06:25.101 05:04:21 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:25.101 05:04:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:25.101 05:04:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.101 05:04:21 -- common/autotest_common.sh@10 -- # set +x 00:06:25.101 ************************************ 00:06:25.101 START TEST accel_copy_crc32c 00:06:25.101 ************************************ 00:06:25.101 05:04:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:25.101 05:04:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.101 05:04:21 -- accel/accel.sh@17 -- # local accel_module 00:06:25.101 05:04:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:25.101 05:04:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:25.101 05:04:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.101 05:04:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.102 05:04:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.102 05:04:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.102 05:04:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.102 05:04:21 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.102 05:04:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.102 05:04:21 -- accel/accel.sh@42 -- # jq -r . 00:06:25.102 [2024-11-20 05:04:21.662383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.102 [2024-11-20 05:04:21.662449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120525 ] 00:06:25.102 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.102 [2024-11-20 05:04:21.717794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.102 [2024-11-20 05:04:21.785622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.482 05:04:22 -- accel/accel.sh@18 -- # out=' 00:06:26.482 SPDK Configuration: 00:06:26.482 Core mask: 0x1 00:06:26.482 00:06:26.482 Accel Perf Configuration: 00:06:26.482 Workload Type: copy_crc32c 00:06:26.482 CRC-32C seed: 0 00:06:26.482 Vector size: 4096 bytes 00:06:26.482 Transfer size: 4096 bytes 00:06:26.482 Vector count 1 00:06:26.482 Module: software 00:06:26.482 Queue depth: 32 00:06:26.482 Allocate depth: 32 00:06:26.482 # threads/core: 1 00:06:26.482 Run time: 1 seconds 00:06:26.482 Verify: Yes 00:06:26.482 00:06:26.482 Running for 1 seconds... 
00:06:26.482 00:06:26.482 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.482 ------------------------------------------------------------------------------------ 00:06:26.482 0,0 337184/s 1317 MiB/s 0 0 00:06:26.482 ==================================================================================== 00:06:26.482 Total 337184/s 1317 MiB/s 0 0' 00:06:26.482 05:04:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.482 05:04:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:26.482 05:04:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.482 05:04:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:26.482 05:04:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.482 05:04:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.482 05:04:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.482 05:04:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.482 05:04:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.482 05:04:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.482 05:04:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.482 05:04:22 -- accel/accel.sh@42 -- # jq -r . 00:06:26.482 [2024-11-20 05:04:22.993006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:26.482 [2024-11-20 05:04:22.993062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120739 ] 00:06:26.482 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.482 [2024-11-20 05:04:23.047572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.482 [2024-11-20 05:04:23.114527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.482 05:04:23 -- accel/accel.sh@21 -- # val= 00:06:26.482 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.482 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.482 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.482 05:04:23 -- accel/accel.sh@21 -- # val= 00:06:26.482 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.482 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.482 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.482 05:04:23 -- accel/accel.sh@21 -- # val=0x1 00:06:26.482 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val= 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val= 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- 
accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val=0 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val= 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val=software 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val=32 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val=32 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val=1 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 
-- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val=Yes 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val= 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:26.483 05:04:23 -- accel/accel.sh@21 -- # val= 00:06:26.483 05:04:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # IFS=: 00:06:26.483 05:04:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.864 05:04:24 -- accel/accel.sh@21 -- # val= 00:06:27.864 05:04:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # IFS=: 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # read -r var val 00:06:27.864 05:04:24 -- accel/accel.sh@21 -- # val= 00:06:27.864 05:04:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # IFS=: 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # read -r var val 00:06:27.864 05:04:24 -- accel/accel.sh@21 -- # val= 00:06:27.864 05:04:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # IFS=: 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # read -r var val 00:06:27.864 05:04:24 -- accel/accel.sh@21 -- # val= 00:06:27.864 05:04:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # IFS=: 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # read -r var val 00:06:27.864 05:04:24 -- accel/accel.sh@21 -- # val= 00:06:27.864 05:04:24 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # IFS=: 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # read -r var val 00:06:27.864 05:04:24 -- accel/accel.sh@21 -- # val= 00:06:27.864 05:04:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # IFS=: 00:06:27.864 05:04:24 -- accel/accel.sh@20 -- # read -r var val 00:06:27.864 05:04:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.864 05:04:24 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:27.864 05:04:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.864 00:06:27.864 real 0m2.675s 00:06:27.864 user 0m2.458s 00:06:27.864 sys 0m0.218s 00:06:27.864 05:04:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.864 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:06:27.864 ************************************ 00:06:27.864 END TEST accel_copy_crc32c 00:06:27.864 ************************************ 00:06:27.864 05:04:24 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.864 05:04:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:27.864 05:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.864 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:06:27.864 ************************************ 00:06:27.864 START TEST accel_copy_crc32c_C2 00:06:27.864 ************************************ 00:06:27.864 05:04:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.864 05:04:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.864 05:04:24 -- accel/accel.sh@17 -- # local accel_module 00:06:27.864 05:04:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:27.864 05:04:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:27.864 05:04:24 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:27.864 05:04:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.864 05:04:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.864 05:04:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.864 05:04:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.864 05:04:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.864 05:04:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.864 05:04:24 -- accel/accel.sh@42 -- # jq -r . 00:06:27.864 [2024-11-20 05:04:24.368028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.864 [2024-11-20 05:04:24.368090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120992 ] 00:06:27.864 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.864 [2024-11-20 05:04:24.423773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.864 [2024-11-20 05:04:24.491726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.247 05:04:25 -- accel/accel.sh@18 -- # out=' 00:06:29.247 SPDK Configuration: 00:06:29.247 Core mask: 0x1 00:06:29.247 00:06:29.247 Accel Perf Configuration: 00:06:29.247 Workload Type: copy_crc32c 00:06:29.247 CRC-32C seed: 0 00:06:29.247 Vector size: 4096 bytes 00:06:29.247 Transfer size: 8192 bytes 00:06:29.247 Vector count 2 00:06:29.247 Module: software 00:06:29.247 Queue depth: 32 00:06:29.247 Allocate depth: 32 00:06:29.247 # threads/core: 1 00:06:29.247 Run time: 1 seconds 00:06:29.247 Verify: Yes 00:06:29.247 00:06:29.247 Running for 1 seconds... 
00:06:29.247 00:06:29.247 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.247 ------------------------------------------------------------------------------------ 00:06:29.247 0,0 244128/s 1907 MiB/s 0 0 00:06:29.247 ==================================================================================== 00:06:29.247 Total 244128/s 953 MiB/s 0 0' 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:29.247 05:04:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.247 05:04:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.247 05:04:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:29.247 05:04:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.247 05:04:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.247 05:04:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.247 05:04:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.247 05:04:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.247 05:04:25 -- accel/accel.sh@42 -- # jq -r . 00:06:29.247 [2024-11-20 05:04:25.712748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.247 [2024-11-20 05:04:25.712806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121211 ] 00:06:29.247 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.247 [2024-11-20 05:04:25.767617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.247 [2024-11-20 05:04:25.835907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val= 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val= 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val=0x1 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val= 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val= 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- 
accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val=0 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val= 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val=software 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val=32 00:06:29.247 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.247 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.247 05:04:25 -- accel/accel.sh@21 -- # val=32 00:06:29.248 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.248 05:04:25 -- accel/accel.sh@21 -- # val=1 00:06:29.248 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.248 05:04:25 -- accel/accel.sh@20 
-- # read -r var val 00:06:29.248 05:04:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.248 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.248 05:04:25 -- accel/accel.sh@21 -- # val=Yes 00:06:29.248 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.248 05:04:25 -- accel/accel.sh@21 -- # val= 00:06:29.248 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.248 05:04:25 -- accel/accel.sh@21 -- # val= 00:06:29.248 05:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.248 05:04:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.630 05:04:27 -- accel/accel.sh@21 -- # val= 00:06:30.630 05:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # IFS=: 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # read -r var val 00:06:30.630 05:04:27 -- accel/accel.sh@21 -- # val= 00:06:30.630 05:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # IFS=: 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # read -r var val 00:06:30.630 05:04:27 -- accel/accel.sh@21 -- # val= 00:06:30.630 05:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # IFS=: 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # read -r var val 00:06:30.630 05:04:27 -- accel/accel.sh@21 -- # val= 00:06:30.630 05:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # IFS=: 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # read -r var val 00:06:30.630 05:04:27 -- accel/accel.sh@21 -- # val= 00:06:30.630 05:04:27 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # IFS=: 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # read -r var val 00:06:30.630 05:04:27 -- accel/accel.sh@21 -- # val= 00:06:30.630 05:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # IFS=: 00:06:30.630 05:04:27 -- accel/accel.sh@20 -- # read -r var val 00:06:30.630 05:04:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.630 05:04:27 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:30.630 05:04:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.630 00:06:30.630 real 0m2.692s 00:06:30.630 user 0m2.467s 00:06:30.630 sys 0m0.224s 00:06:30.630 05:04:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.630 05:04:27 -- common/autotest_common.sh@10 -- # set +x 00:06:30.630 ************************************ 00:06:30.630 END TEST accel_copy_crc32c_C2 00:06:30.630 ************************************ 00:06:30.630 05:04:27 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:30.630 05:04:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:30.630 05:04:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.630 05:04:27 -- common/autotest_common.sh@10 -- # set +x 00:06:30.630 ************************************ 00:06:30.630 START TEST accel_dualcast 00:06:30.630 ************************************ 00:06:30.630 05:04:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:30.630 05:04:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.630 05:04:27 -- accel/accel.sh@17 -- # local accel_module 00:06:30.630 05:04:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:30.630 05:04:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:30.630 05:04:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.630 05:04:27 -- 
accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.630 05:04:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.630 05:04:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.630 05:04:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.630 05:04:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.630 05:04:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.630 05:04:27 -- accel/accel.sh@42 -- # jq -r . 00:06:30.630 [2024-11-20 05:04:27.086225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.630 [2024-11-20 05:04:27.086284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121450 ] 00:06:30.630 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.630 [2024-11-20 05:04:27.141770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.630 [2024-11-20 05:04:27.211145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.012 05:04:28 -- accel/accel.sh@18 -- # out=' 00:06:32.012 SPDK Configuration: 00:06:32.012 Core mask: 0x1 00:06:32.012 00:06:32.012 Accel Perf Configuration: 00:06:32.012 Workload Type: dualcast 00:06:32.012 Transfer size: 4096 bytes 00:06:32.012 Vector count 1 00:06:32.012 Module: software 00:06:32.012 Queue depth: 32 00:06:32.012 Allocate depth: 32 00:06:32.012 # threads/core: 1 00:06:32.012 Run time: 1 seconds 00:06:32.012 Verify: Yes 00:06:32.012 00:06:32.012 Running for 1 seconds... 
00:06:32.012 00:06:32.012 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.012 ------------------------------------------------------------------------------------ 00:06:32.012 0,0 519424/s 2029 MiB/s 0 0 00:06:32.012 ==================================================================================== 00:06:32.012 Total 519424/s 2029 MiB/s 0 0' 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.012 05:04:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:32.012 05:04:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:32.012 05:04:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.012 05:04:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.012 05:04:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.012 05:04:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.012 05:04:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.012 05:04:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.012 05:04:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.012 05:04:28 -- accel/accel.sh@42 -- # jq -r . 00:06:32.012 [2024-11-20 05:04:28.436374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:32.012 [2024-11-20 05:04:28.436453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121671 ] 00:06:32.012 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.012 [2024-11-20 05:04:28.493240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.012 [2024-11-20 05:04:28.560654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.012 05:04:28 -- accel/accel.sh@21 -- # val= 00:06:32.012 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.012 05:04:28 -- accel/accel.sh@21 -- # val= 00:06:32.012 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.012 05:04:28 -- accel/accel.sh@21 -- # val=0x1 00:06:32.012 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.012 05:04:28 -- accel/accel.sh@21 -- # val= 00:06:32.012 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.012 05:04:28 -- accel/accel.sh@21 -- # val= 00:06:32.012 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.012 05:04:28 -- accel/accel.sh@21 -- # val=dualcast 00:06:32.012 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.012 05:04:28 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.012 05:04:28 -- 
accel/accel.sh@20 -- # read -r var val 00:06:32.012 05:04:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.012 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.012 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.013 05:04:28 -- accel/accel.sh@21 -- # val= 00:06:32.013 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.013 05:04:28 -- accel/accel.sh@21 -- # val=software 00:06:32.013 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.013 05:04:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.013 05:04:28 -- accel/accel.sh@21 -- # val=32 00:06:32.013 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.013 05:04:28 -- accel/accel.sh@21 -- # val=32 00:06:32.013 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.013 05:04:28 -- accel/accel.sh@21 -- # val=1 00:06:32.013 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.013 05:04:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.013 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.013 05:04:28 -- accel/accel.sh@21 -- # val=Yes 00:06:32.013 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 
-- # read -r var val 00:06:32.013 05:04:28 -- accel/accel.sh@21 -- # val= 00:06:32.013 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.013 05:04:28 -- accel/accel.sh@21 -- # val= 00:06:32.013 05:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # IFS=: 00:06:32.013 05:04:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.951 05:04:29 -- accel/accel.sh@21 -- # val= 00:06:32.951 05:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.951 05:04:29 -- accel/accel.sh@21 -- # val= 00:06:32.951 05:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.951 05:04:29 -- accel/accel.sh@21 -- # val= 00:06:32.951 05:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.951 05:04:29 -- accel/accel.sh@21 -- # val= 00:06:32.951 05:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.951 05:04:29 -- accel/accel.sh@21 -- # val= 00:06:32.951 05:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.951 05:04:29 -- accel/accel.sh@21 -- # val= 00:06:32.951 05:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.951 05:04:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.951 05:04:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.951 05:04:29 -- 
accel/accel.sh@28 -- # [[ -n dualcast ]]
00:06:32.951 05:04:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:32.951
00:06:32.951 real 0m2.697s
00:06:32.951 user 0m2.478s
00:06:32.951 sys 0m0.218s
00:06:32.951 05:04:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:32.951 05:04:29 -- common/autotest_common.sh@10 -- # set +x
00:06:32.951 ************************************
00:06:32.951 END TEST accel_dualcast
00:06:32.951 ************************************
00:06:33.211 05:04:29 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:33.211 05:04:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:33.211 05:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:33.211 05:04:29 -- common/autotest_common.sh@10 -- # set +x
00:06:33.211 ************************************
00:06:33.211 START TEST accel_compare
00:06:33.211 ************************************
00:06:33.211 05:04:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y
00:06:33.211 05:04:29 -- accel/accel.sh@16 -- # local accel_opc
00:06:33.211 05:04:29 -- accel/accel.sh@17 -- # local accel_module
00:06:33.211 05:04:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y
00:06:33.211 05:04:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:33.211 05:04:29 -- accel/accel.sh@12 -- # build_accel_config
00:06:33.211 05:04:29 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:33.211 05:04:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:33.211 05:04:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:33.211 05:04:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:33.211 05:04:29 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:33.211 05:04:29 -- accel/accel.sh@41 -- # local IFS=,
00:06:33.211 05:04:29 -- accel/accel.sh@42 -- # jq -r .
00:06:33.211 [2024-11-20 05:04:29.818089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:33.211 [2024-11-20 05:04:29.818162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121921 ]
00:06:33.211 EAL: No free 2048 kB hugepages reported on node 1
00:06:33.211 [2024-11-20 05:04:29.875444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.211 [2024-11-20 05:04:29.945624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.591 05:04:31 -- accel/accel.sh@18 -- # out='
00:06:34.591 SPDK Configuration:
00:06:34.591 Core mask: 0x1
00:06:34.591
00:06:34.591 Accel Perf Configuration:
00:06:34.591 Workload Type: compare
00:06:34.591 Transfer size: 4096 bytes
00:06:34.591 Vector count 1
00:06:34.591 Module: software
00:06:34.591 Queue depth: 32
00:06:34.591 Allocate depth: 32
00:06:34.591 # threads/core: 1
00:06:34.591 Run time: 1 seconds
00:06:34.591 Verify: Yes
00:06:34.591
00:06:34.591 Running for 1 seconds...
00:06:34.591
00:06:34.592 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:34.592 ------------------------------------------------------------------------------------
00:06:34.592 0,0 621280/s 2426 MiB/s 0 0
00:06:34.592 ====================================================================================
00:06:34.592 Total 621280/s 2426 MiB/s 0 0'
00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=:
00:06:34.592 05:04:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val
00:06:34.592 05:04:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:34.592 05:04:31 -- accel/accel.sh@12 -- # build_accel_config
00:06:34.592 05:04:31 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:34.592 05:04:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:34.592 05:04:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:34.592 05:04:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:34.592 05:04:31 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:34.592 05:04:31 -- accel/accel.sh@41 -- # local IFS=,
00:06:34.592 05:04:31 -- accel/accel.sh@42 -- # jq -r .
00:06:34.592 [2024-11-20 05:04:31.151961] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:34.592 [2024-11-20 05:04:31.152012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122146 ] 00:06:34.592 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.592 [2024-11-20 05:04:31.206542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.592 [2024-11-20 05:04:31.272398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val= 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val= 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val=0x1 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val= 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val= 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val=compare 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- 
accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val= 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val=software 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val=32 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val=32 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val=1 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val=Yes 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 
-- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val= 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.592 05:04:31 -- accel/accel.sh@21 -- # val= 00:06:34.592 05:04:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.592 05:04:31 -- accel/accel.sh@20 -- # read -r var val 00:06:35.971 05:04:32 -- accel/accel.sh@21 -- # val= 00:06:35.971 05:04:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # IFS=: 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # read -r var val 00:06:35.971 05:04:32 -- accel/accel.sh@21 -- # val= 00:06:35.971 05:04:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # IFS=: 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # read -r var val 00:06:35.971 05:04:32 -- accel/accel.sh@21 -- # val= 00:06:35.971 05:04:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # IFS=: 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # read -r var val 00:06:35.971 05:04:32 -- accel/accel.sh@21 -- # val= 00:06:35.971 05:04:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # IFS=: 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # read -r var val 00:06:35.971 05:04:32 -- accel/accel.sh@21 -- # val= 00:06:35.971 05:04:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # IFS=: 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # read -r var val 00:06:35.971 05:04:32 -- accel/accel.sh@21 -- # val= 00:06:35.971 05:04:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # IFS=: 00:06:35.971 05:04:32 -- accel/accel.sh@20 -- # read -r var val 00:06:35.971 05:04:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.971 05:04:32 -- 
accel/accel.sh@28 -- # [[ -n compare ]]
00:06:35.971 05:04:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:35.971
00:06:35.971 real 0m2.678s
00:06:35.971 user 0m2.460s
00:06:35.971 sys 0m0.217s
00:06:35.971 05:04:32 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:35.971 05:04:32 -- common/autotest_common.sh@10 -- # set +x
00:06:35.971 ************************************
00:06:35.971 END TEST accel_compare
00:06:35.971 ************************************
00:06:35.971 05:04:32 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:35.971 05:04:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:35.971 05:04:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:35.971 05:04:32 -- common/autotest_common.sh@10 -- # set +x
00:06:35.971 ************************************
00:06:35.971 START TEST accel_xor
00:06:35.971 ************************************
00:06:35.971 05:04:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y
00:06:35.971 05:04:32 -- accel/accel.sh@16 -- # local accel_opc
00:06:35.971 05:04:32 -- accel/accel.sh@17 -- # local accel_module
00:06:35.971 05:04:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y
00:06:35.971 05:04:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:35.971 05:04:32 -- accel/accel.sh@12 -- # build_accel_config
00:06:35.971 05:04:32 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:35.971 05:04:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:35.971 05:04:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:35.971 05:04:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:35.971 05:04:32 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:35.971 05:04:32 -- accel/accel.sh@41 -- # local IFS=,
00:06:35.971 05:04:32 -- accel/accel.sh@42 -- # jq -r .
00:06:35.971 [2024-11-20 05:04:32.526541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:35.971 [2024-11-20 05:04:32.526619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122384 ]
00:06:35.971 EAL: No free 2048 kB hugepages reported on node 1
00:06:35.971 [2024-11-20 05:04:32.582892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.971 [2024-11-20 05:04:32.660976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.351 05:04:33 -- accel/accel.sh@18 -- # out='
00:06:37.351 SPDK Configuration:
00:06:37.351 Core mask: 0x1
00:06:37.351
00:06:37.351 Accel Perf Configuration:
00:06:37.351 Workload Type: xor
00:06:37.351 Source buffers: 2
00:06:37.351 Transfer size: 4096 bytes
00:06:37.351 Vector count 1
00:06:37.351 Module: software
00:06:37.351 Queue depth: 32
00:06:37.351 Allocate depth: 32
00:06:37.351 # threads/core: 1
00:06:37.351 Run time: 1 seconds
00:06:37.351 Verify: Yes
00:06:37.351
00:06:37.351 Running for 1 seconds...
00:06:37.351
00:06:37.351 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:37.351 ------------------------------------------------------------------------------------
00:06:37.351 0,0 506880/s 1980 MiB/s 0 0
00:06:37.351 ====================================================================================
00:06:37.351 Total 506880/s 1980 MiB/s 0 0'
00:06:37.351 05:04:33 -- accel/accel.sh@20 -- # IFS=:
00:06:37.351 05:04:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:37.351 05:04:33 -- accel/accel.sh@20 -- # read -r var val
00:06:37.351 05:04:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:37.351 05:04:33 -- accel/accel.sh@12 -- # build_accel_config
00:06:37.351 05:04:33 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:37.351 05:04:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:37.351 05:04:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:37.351 05:04:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:37.351 05:04:33 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:37.351 05:04:33 -- accel/accel.sh@41 -- # local IFS=,
00:06:37.351 05:04:33 -- accel/accel.sh@42 -- # jq -r .
00:06:37.351 [2024-11-20 05:04:33.867002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:37.351 [2024-11-20 05:04:33.867057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122625 ] 00:06:37.351 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.351 [2024-11-20 05:04:33.920625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.351 [2024-11-20 05:04:33.986731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val= 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val= 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val=0x1 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val= 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val= 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val=xor 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 
-- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val=2 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val= 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val=software 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.351 05:04:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.351 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.351 05:04:34 -- accel/accel.sh@21 -- # val=32 00:06:37.351 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.352 05:04:34 -- accel/accel.sh@21 -- # val=32 00:06:37.352 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.352 05:04:34 -- accel/accel.sh@21 -- # val=1 00:06:37.352 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.352 05:04:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.352 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # read -r var val 
00:06:37.352 05:04:34 -- accel/accel.sh@21 -- # val=Yes 00:06:37.352 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.352 05:04:34 -- accel/accel.sh@21 -- # val= 00:06:37.352 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.352 05:04:34 -- accel/accel.sh@21 -- # val= 00:06:37.352 05:04:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.352 05:04:34 -- accel/accel.sh@20 -- # read -r var val 00:06:38.732 05:04:35 -- accel/accel.sh@21 -- # val= 00:06:38.732 05:04:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.732 05:04:35 -- accel/accel.sh@20 -- # IFS=: 00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # read -r var val 00:06:38.733 05:04:35 -- accel/accel.sh@21 -- # val= 00:06:38.733 05:04:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # IFS=: 00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # read -r var val 00:06:38.733 05:04:35 -- accel/accel.sh@21 -- # val= 00:06:38.733 05:04:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # IFS=: 00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # read -r var val 00:06:38.733 05:04:35 -- accel/accel.sh@21 -- # val= 00:06:38.733 05:04:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # IFS=: 00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # read -r var val 00:06:38.733 05:04:35 -- accel/accel.sh@21 -- # val= 00:06:38.733 05:04:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # IFS=: 00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # read -r var val 00:06:38.733 05:04:35 -- accel/accel.sh@21 -- # val= 00:06:38.733 05:04:35 -- accel/accel.sh@22 -- # case "$var" in 
00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # IFS=:
00:06:38.733 05:04:35 -- accel/accel.sh@20 -- # read -r var val
00:06:38.733 05:04:35 -- accel/accel.sh@28 -- # [[ -n software ]]
00:06:38.733 05:04:35 -- accel/accel.sh@28 -- # [[ -n xor ]]
00:06:38.733 05:04:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:38.733
00:06:38.733 real 0m2.682s
00:06:38.733 user 0m2.470s
00:06:38.733 sys 0m0.212s
00:06:38.733 05:04:35 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:38.733 05:04:35 -- common/autotest_common.sh@10 -- # set +x
00:06:38.733 ************************************
00:06:38.733 END TEST accel_xor
00:06:38.733 ************************************
00:06:38.733 05:04:35 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:38.733 05:04:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:38.733 05:04:35 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:38.733 05:04:35 -- common/autotest_common.sh@10 -- # set +x
00:06:38.733 ************************************
00:06:38.733 START TEST accel_xor
00:06:38.733 ************************************
00:06:38.733 05:04:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3
00:06:38.733 05:04:35 -- accel/accel.sh@16 -- # local accel_opc
00:06:38.733 05:04:35 -- accel/accel.sh@17 -- # local accel_module
00:06:38.733 05:04:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3
00:06:38.733 05:04:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:38.733 05:04:35 -- accel/accel.sh@12 -- # build_accel_config
00:06:38.733 05:04:35 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:38.733 05:04:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:38.733 05:04:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:38.733 05:04:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:38.733 05:04:35 -- accel/accel.sh@37 -- # [[ -n '' ]]
05:04:35 -- accel/accel.sh@41 -- # local IFS=,
00:06:38.733 05:04:35 -- accel/accel.sh@42 -- # jq -r .
00:06:38.733 [2024-11-20 05:04:35.238910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:38.733 [2024-11-20 05:04:35.238980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122864 ]
00:06:38.733 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.733 [2024-11-20 05:04:35.298221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.733 [2024-11-20 05:04:35.365501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.114 05:04:36 -- accel/accel.sh@18 -- # out='
00:06:40.114 SPDK Configuration:
00:06:40.114 Core mask: 0x1
00:06:40.114
00:06:40.114 Accel Perf Configuration:
00:06:40.114 Workload Type: xor
00:06:40.114 Source buffers: 3
00:06:40.114 Transfer size: 4096 bytes
00:06:40.114 Vector count 1
00:06:40.114 Module: software
00:06:40.114 Queue depth: 32
00:06:40.114 Allocate depth: 32
00:06:40.114 # threads/core: 1
00:06:40.114 Run time: 1 seconds
00:06:40.114 Verify: Yes
00:06:40.114
00:06:40.114 Running for 1 seconds...
00:06:40.114
00:06:40.114 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:40.114 ------------------------------------------------------------------------------------
00:06:40.114 0,0 479616/s 1873 MiB/s 0 0
00:06:40.114 ====================================================================================
00:06:40.114 Total 479616/s 1873 MiB/s 0 0'
00:06:40.114 05:04:36 -- accel/accel.sh@20 -- # IFS=:
00:06:40.114 05:04:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:40.114 05:04:36 -- accel/accel.sh@20 -- # read -r var val
00:06:40.114 05:04:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:40.114 05:04:36 -- accel/accel.sh@12 -- # build_accel_config
00:06:40.114 05:04:36 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:40.114 05:04:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:40.114 05:04:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:40.114 05:04:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:40.114 05:04:36 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:40.114 05:04:36 -- accel/accel.sh@41 -- # local IFS=,
00:06:40.114 05:04:36 -- accel/accel.sh@42 -- # jq -r .
00:06:40.114 [2024-11-20 05:04:36.575022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:40.115 [2024-11-20 05:04:36.575077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123081 ] 00:06:40.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.115 [2024-11-20 05:04:36.629717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.115 [2024-11-20 05:04:36.698122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val= 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val= 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val=0x1 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val= 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val= 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val=xor 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 
-- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val=3 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val= 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val=software 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val=32 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val=32 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val=1 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 
00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val=Yes 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val= 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.115 05:04:36 -- accel/accel.sh@21 -- # val= 00:06:40.115 05:04:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # IFS=: 00:06:40.115 05:04:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.497 05:04:37 -- accel/accel.sh@21 -- # val= 00:06:41.497 05:04:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # IFS=: 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # read -r var val 00:06:41.497 05:04:37 -- accel/accel.sh@21 -- # val= 00:06:41.497 05:04:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # IFS=: 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # read -r var val 00:06:41.497 05:04:37 -- accel/accel.sh@21 -- # val= 00:06:41.497 05:04:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # IFS=: 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # read -r var val 00:06:41.497 05:04:37 -- accel/accel.sh@21 -- # val= 00:06:41.497 05:04:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # IFS=: 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # read -r var val 00:06:41.497 05:04:37 -- accel/accel.sh@21 -- # val= 00:06:41.497 05:04:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # IFS=: 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # read -r var val 00:06:41.497 05:04:37 -- accel/accel.sh@21 -- # val= 00:06:41.497 05:04:37 -- accel/accel.sh@22 -- # case "$var" in 
00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # IFS=: 00:06:41.497 05:04:37 -- accel/accel.sh@20 -- # read -r var val 00:06:41.497 05:04:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.497 05:04:37 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:41.497 05:04:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.497 00:06:41.497 real 0m2.685s 00:06:41.497 user 0m2.468s 00:06:41.497 sys 0m0.216s 00:06:41.497 05:04:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.497 05:04:37 -- common/autotest_common.sh@10 -- # set +x 00:06:41.497 ************************************ 00:06:41.497 END TEST accel_xor 00:06:41.497 ************************************ 00:06:41.497 05:04:37 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:41.497 05:04:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:41.497 05:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.497 05:04:37 -- common/autotest_common.sh@10 -- # set +x 00:06:41.497 ************************************ 00:06:41.497 START TEST accel_dif_verify 00:06:41.497 ************************************ 00:06:41.497 05:04:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:41.497 05:04:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.497 05:04:37 -- accel/accel.sh@17 -- # local accel_module 00:06:41.497 05:04:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:41.497 05:04:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:41.497 05:04:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.497 05:04:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.497 05:04:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.497 05:04:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.497 05:04:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.497 05:04:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:41.497 05:04:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.497 05:04:37 -- accel/accel.sh@42 -- # jq -r . 00:06:41.497 [2024-11-20 05:04:37.953205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.497 [2024-11-20 05:04:37.953264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123327 ] 00:06:41.497 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.497 [2024-11-20 05:04:38.008382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.497 [2024-11-20 05:04:38.075635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.880 05:04:39 -- accel/accel.sh@18 -- # out=' 00:06:42.880 SPDK Configuration: 00:06:42.880 Core mask: 0x1 00:06:42.880 00:06:42.880 Accel Perf Configuration: 00:06:42.880 Workload Type: dif_verify 00:06:42.880 Vector size: 4096 bytes 00:06:42.880 Transfer size: 4096 bytes 00:06:42.880 Block size: 512 bytes 00:06:42.880 Metadata size: 8 bytes 00:06:42.880 Vector count 1 00:06:42.880 Module: software 00:06:42.880 Queue depth: 32 00:06:42.880 Allocate depth: 32 00:06:42.880 # threads/core: 1 00:06:42.880 Run time: 1 seconds 00:06:42.880 Verify: No 00:06:42.880 00:06:42.880 Running for 1 seconds... 
00:06:42.880 00:06:42.880 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.880 ------------------------------------------------------------------------------------ 00:06:42.880 0,0 137024/s 543 MiB/s 0 0 00:06:42.880 ==================================================================================== 00:06:42.880 Total 137024/s 535 MiB/s 0 0' 00:06:42.880 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.880 05:04:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:42.880 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.880 05:04:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:42.880 05:04:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.880 05:04:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.880 05:04:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.880 05:04:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.880 05:04:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.880 05:04:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.880 05:04:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.880 05:04:39 -- accel/accel.sh@42 -- # jq -r . 00:06:42.880 [2024-11-20 05:04:39.282514] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:42.881 [2024-11-20 05:04:39.282563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123537 ] 00:06:42.881 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.881 [2024-11-20 05:04:39.337447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.881 [2024-11-20 05:04:39.403893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val= 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val= 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val=0x1 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val= 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val= 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val=dif_verify 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- 
accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val= 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val=software 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val=32 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val=32 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- 
accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val=1 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val=No 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val= 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.881 05:04:39 -- accel/accel.sh@21 -- # val= 00:06:42.881 05:04:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.881 05:04:39 -- accel/accel.sh@20 -- # read -r var val 00:06:43.823 05:04:40 -- accel/accel.sh@21 -- # val= 00:06:43.823 05:04:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # IFS=: 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # read -r var val 00:06:43.823 05:04:40 -- accel/accel.sh@21 -- # val= 00:06:43.823 05:04:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # IFS=: 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # read -r var val 00:06:43.823 05:04:40 -- accel/accel.sh@21 -- # val= 00:06:43.823 05:04:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # IFS=: 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # read -r var val 00:06:43.823 05:04:40 -- accel/accel.sh@21 -- # val= 00:06:43.823 05:04:40 
-- accel/accel.sh@22 -- # case "$var" in 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # IFS=: 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # read -r var val 00:06:43.823 05:04:40 -- accel/accel.sh@21 -- # val= 00:06:43.823 05:04:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # IFS=: 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # read -r var val 00:06:43.823 05:04:40 -- accel/accel.sh@21 -- # val= 00:06:43.823 05:04:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # IFS=: 00:06:43.823 05:04:40 -- accel/accel.sh@20 -- # read -r var val 00:06:43.823 05:04:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.823 05:04:40 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:43.823 05:04:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.823 00:06:43.823 real 0m2.674s 00:06:43.823 user 0m2.462s 00:06:43.823 sys 0m0.213s 00:06:43.823 05:04:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.823 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:06:43.823 ************************************ 00:06:43.823 END TEST accel_dif_verify 00:06:43.823 ************************************ 00:06:43.823 05:04:40 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:43.823 05:04:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:43.823 05:04:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.823 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:06:43.823 ************************************ 00:06:43.823 START TEST accel_dif_generate 00:06:43.823 ************************************ 00:06:43.823 05:04:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:43.823 05:04:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.823 05:04:40 -- accel/accel.sh@17 -- # local accel_module 00:06:43.823 05:04:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 
00:06:43.823 05:04:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:43.823 05:04:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.823 05:04:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.823 05:04:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.823 05:04:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.823 05:04:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.823 05:04:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.823 05:04:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.824 05:04:40 -- accel/accel.sh@42 -- # jq -r . 00:06:44.083 [2024-11-20 05:04:40.656838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.083 [2024-11-20 05:04:40.656896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123775 ] 00:06:44.083 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.083 [2024-11-20 05:04:40.712601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.083 [2024-11-20 05:04:40.780980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.464 05:04:41 -- accel/accel.sh@18 -- # out=' 00:06:45.464 SPDK Configuration: 00:06:45.464 Core mask: 0x1 00:06:45.464 00:06:45.464 Accel Perf Configuration: 00:06:45.464 Workload Type: dif_generate 00:06:45.464 Vector size: 4096 bytes 00:06:45.464 Transfer size: 4096 bytes 00:06:45.464 Block size: 512 bytes 00:06:45.464 Metadata size: 8 bytes 00:06:45.464 Vector count 1 00:06:45.464 Module: software 00:06:45.464 Queue depth: 32 00:06:45.464 Allocate depth: 32 00:06:45.464 # threads/core: 1 00:06:45.464 Run time: 1 seconds 00:06:45.464 Verify: No 00:06:45.464 00:06:45.464 Running for 1 seconds... 
00:06:45.464 00:06:45.464 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.464 ------------------------------------------------------------------------------------ 00:06:45.464 0,0 162208/s 643 MiB/s 0 0 00:06:45.464 ==================================================================================== 00:06:45.464 Total 162208/s 633 MiB/s 0 0' 00:06:45.464 05:04:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.464 05:04:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.464 05:04:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:45.464 05:04:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:45.464 05:04:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.464 05:04:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.464 05:04:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.465 05:04:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.465 05:04:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.465 05:04:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.465 05:04:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.465 05:04:41 -- accel/accel.sh@42 -- # jq -r . 00:06:45.465 [2024-11-20 05:04:42.001856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:45.465 [2024-11-20 05:04:42.001917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123992 ] 00:06:45.465 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.465 [2024-11-20 05:04:42.057600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.465 [2024-11-20 05:04:42.125942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val= 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val= 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val=0x1 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val= 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val= 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val=dif_generate 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- 
accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val= 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val=software 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val=32 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val=32 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- 
accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val=1 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val=No 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val= 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.465 05:04:42 -- accel/accel.sh@21 -- # val= 00:06:45.465 05:04:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # IFS=: 00:06:45.465 05:04:42 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 05:04:43 -- accel/accel.sh@21 -- # val= 00:06:46.847 05:04:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 05:04:43 -- accel/accel.sh@21 -- # val= 00:06:46.847 05:04:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 05:04:43 -- accel/accel.sh@21 -- # val= 00:06:46.847 05:04:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 05:04:43 -- accel/accel.sh@21 -- # val= 00:06:46.847 05:04:43 
-- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 05:04:43 -- accel/accel.sh@21 -- # val= 00:06:46.847 05:04:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 05:04:43 -- accel/accel.sh@21 -- # val= 00:06:46.847 05:04:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 05:04:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 05:04:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.847 05:04:43 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:46.847 05:04:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.847 00:06:46.847 real 0m2.693s 00:06:46.847 user 0m2.468s 00:06:46.847 sys 0m0.226s 00:06:46.847 05:04:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.847 05:04:43 -- common/autotest_common.sh@10 -- # set +x 00:06:46.847 ************************************ 00:06:46.847 END TEST accel_dif_generate 00:06:46.847 ************************************ 00:06:46.847 05:04:43 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:46.847 05:04:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:46.848 05:04:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.848 05:04:43 -- common/autotest_common.sh@10 -- # set +x 00:06:46.848 ************************************ 00:06:46.848 START TEST accel_dif_generate_copy 00:06:46.848 ************************************ 00:06:46.848 05:04:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:46.848 05:04:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.848 05:04:43 -- accel/accel.sh@17 -- # local accel_module 00:06:46.848 05:04:43 -- accel/accel.sh@18 -- # accel_perf -t 
1 -w dif_generate_copy 00:06:46.848 05:04:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:46.848 05:04:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.848 05:04:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.848 05:04:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.848 05:04:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.848 05:04:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.848 05:04:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.848 05:04:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.848 05:04:43 -- accel/accel.sh@42 -- # jq -r . 00:06:46.848 [2024-11-20 05:04:43.379930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.848 [2024-11-20 05:04:43.379996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124244 ] 00:06:46.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.848 [2024-11-20 05:04:43.436792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.848 [2024-11-20 05:04:43.506990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.229 05:04:44 -- accel/accel.sh@18 -- # out=' 00:06:48.229 SPDK Configuration: 00:06:48.229 Core mask: 0x1 00:06:48.229 00:06:48.229 Accel Perf Configuration: 00:06:48.229 Workload Type: dif_generate_copy 00:06:48.229 Vector size: 4096 bytes 00:06:48.229 Transfer size: 4096 bytes 00:06:48.229 Vector count 1 00:06:48.229 Module: software 00:06:48.229 Queue depth: 32 00:06:48.229 Allocate depth: 32 00:06:48.229 # threads/core: 1 00:06:48.229 Run time: 1 seconds 00:06:48.229 Verify: No 00:06:48.229 00:06:48.229 Running for 1 seconds... 
00:06:48.229 00:06:48.229 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.230 ------------------------------------------------------------------------------------ 00:06:48.230 0,0 125920/s 499 MiB/s 0 0 00:06:48.230 ==================================================================================== 00:06:48.230 Total 125920/s 491 MiB/s 0 0' 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:48.230 05:04:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.230 05:04:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.230 05:04:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.230 05:04:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.230 05:04:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.230 05:04:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.230 05:04:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.230 05:04:44 -- accel/accel.sh@42 -- # jq -r . 00:06:48.230 [2024-11-20 05:04:44.715309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:48.230 [2024-11-20 05:04:44.715357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124477 ] 00:06:48.230 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.230 [2024-11-20 05:04:44.768490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.230 [2024-11-20 05:04:44.835733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val= 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val= 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val=0x1 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val= 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val= 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 
05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val= 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val=software 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val=32 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val=32 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val=1 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 
-- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val=No 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val= 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.230 05:04:44 -- accel/accel.sh@21 -- # val= 00:06:48.230 05:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # IFS=: 00:06:48.230 05:04:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.611 05:04:46 -- accel/accel.sh@21 -- # val= 00:06:49.611 05:04:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # IFS=: 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # read -r var val 00:06:49.611 05:04:46 -- accel/accel.sh@21 -- # val= 00:06:49.611 05:04:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # IFS=: 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # read -r var val 00:06:49.611 05:04:46 -- accel/accel.sh@21 -- # val= 00:06:49.611 05:04:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # IFS=: 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # read -r var val 00:06:49.611 05:04:46 -- accel/accel.sh@21 -- # val= 00:06:49.611 05:04:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # IFS=: 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # read -r var val 00:06:49.611 05:04:46 -- accel/accel.sh@21 -- # val= 00:06:49.611 05:04:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # IFS=: 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # read -r var val 00:06:49.611 05:04:46 -- accel/accel.sh@21 -- # val= 00:06:49.611 05:04:46 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # IFS=: 00:06:49.611 05:04:46 -- accel/accel.sh@20 -- # read -r var val 00:06:49.611 05:04:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.611 05:04:46 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:49.611 05:04:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.611 00:06:49.611 real 0m2.679s 00:06:49.611 user 0m2.467s 00:06:49.611 sys 0m0.212s 00:06:49.611 05:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.611 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:06:49.611 ************************************ 00:06:49.611 END TEST accel_dif_generate_copy 00:06:49.611 ************************************ 00:06:49.611 05:04:46 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:49.612 05:04:46 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:49.612 05:04:46 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:49.612 05:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.612 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:06:49.612 ************************************ 00:06:49.612 START TEST accel_comp 00:06:49.612 ************************************ 00:06:49.612 05:04:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:49.612 05:04:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.612 05:04:46 -- accel/accel.sh@17 -- # local accel_module 00:06:49.612 05:04:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:49.612 05:04:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 
00:06:49.612 05:04:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.612 05:04:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.612 05:04:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.612 05:04:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.612 05:04:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.612 05:04:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.612 05:04:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.612 05:04:46 -- accel/accel.sh@42 -- # jq -r . 00:06:49.612 [2024-11-20 05:04:46.088610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.612 [2024-11-20 05:04:46.088667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124725 ] 00:06:49.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.612 [2024-11-20 05:04:46.143759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.612 [2024-11-20 05:04:46.212145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.994 05:04:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:50.994 00:06:50.994 SPDK Configuration: 00:06:50.994 Core mask: 0x1 00:06:50.994 00:06:50.994 Accel Perf Configuration: 00:06:50.994 Workload Type: compress 00:06:50.994 Transfer size: 4096 bytes 00:06:50.994 Vector count 1 00:06:50.994 Module: software 00:06:50.994 File Name: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:50.994 Queue depth: 32 00:06:50.994 Allocate depth: 32 00:06:50.994 # threads/core: 1 00:06:50.994 Run time: 1 seconds 00:06:50.994 Verify: No 00:06:50.994 00:06:50.994 Running for 1 seconds... 
00:06:50.994 00:06:50.994 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.994 ------------------------------------------------------------------------------------ 00:06:50.994 0,0 64288/s 268 MiB/s 0 0 00:06:50.994 ==================================================================================== 00:06:50.994 Total 64288/s 251 MiB/s 0 0' 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.994 05:04:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.994 05:04:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:50.994 05:04:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.994 05:04:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.994 05:04:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.994 05:04:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.994 05:04:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.994 05:04:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.994 05:04:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.994 05:04:47 -- accel/accel.sh@42 -- # jq -r . 00:06:50.994 [2024-11-20 05:04:47.420932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:50.994 [2024-11-20 05:04:47.420981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124944 ] 00:06:50.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.994 [2024-11-20 05:04:47.474945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.994 [2024-11-20 05:04:47.545384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.994 05:04:47 -- accel/accel.sh@21 -- # val= 00:06:50.994 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.994 05:04:47 -- accel/accel.sh@21 -- # val= 00:06:50.994 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.994 05:04:47 -- accel/accel.sh@21 -- # val= 00:06:50.994 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.994 05:04:47 -- accel/accel.sh@21 -- # val=0x1 00:06:50.994 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.994 05:04:47 -- accel/accel.sh@21 -- # val= 00:06:50.994 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.994 05:04:47 -- accel/accel.sh@21 -- # val= 00:06:50.994 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.994 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 
-- # val=compress 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val= 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val=software 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val=32 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val=32 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val=1 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 
00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val=No 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val= 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.995 05:04:47 -- accel/accel.sh@21 -- # val= 00:06:50.995 05:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # IFS=: 00:06:50.995 05:04:47 -- accel/accel.sh@20 -- # read -r var val 00:06:51.935 05:04:48 -- accel/accel.sh@21 -- # val= 00:06:51.935 05:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.935 05:04:48 -- accel/accel.sh@21 -- # val= 00:06:51.935 05:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.935 05:04:48 -- accel/accel.sh@21 -- # val= 00:06:51.935 05:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.935 05:04:48 -- accel/accel.sh@21 -- # val= 00:06:51.935 05:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.935 05:04:48 -- accel/accel.sh@21 -- # 
val= 00:06:51.935 05:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.935 05:04:48 -- accel/accel.sh@21 -- # val= 00:06:51.935 05:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.935 05:04:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.935 05:04:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.935 05:04:48 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:51.935 05:04:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.935 00:06:51.935 real 0m2.683s 00:06:51.935 user 0m2.465s 00:06:51.935 sys 0m0.219s 00:06:51.935 05:04:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.935 05:04:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.935 ************************************ 00:06:51.935 END TEST accel_comp 00:06:51.935 ************************************ 00:06:52.195 05:04:48 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:52.195 05:04:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:52.195 05:04:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.195 05:04:48 -- common/autotest_common.sh@10 -- # set +x 00:06:52.195 ************************************ 00:06:52.195 START TEST accel_decomp 00:06:52.195 ************************************ 00:06:52.195 05:04:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:52.195 05:04:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.195 05:04:48 -- accel/accel.sh@17 -- # local accel_module 00:06:52.195 05:04:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:52.195 05:04:48 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:52.195 05:04:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.195 05:04:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.195 05:04:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.195 05:04:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.195 05:04:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.195 05:04:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.195 05:04:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.195 05:04:48 -- accel/accel.sh@42 -- # jq -r . 00:06:52.195 [2024-11-20 05:04:48.806701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.195 [2024-11-20 05:04:48.806760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125198 ] 00:06:52.195 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.195 [2024-11-20 05:04:48.862552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.195 [2024-11-20 05:04:48.930612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.576 05:04:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:53.576 00:06:53.576 SPDK Configuration: 00:06:53.576 Core mask: 0x1 00:06:53.576 00:06:53.576 Accel Perf Configuration: 00:06:53.576 Workload Type: decompress 00:06:53.576 Transfer size: 4096 bytes 00:06:53.576 Vector count 1 00:06:53.576 Module: software 00:06:53.576 File Name: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:53.576 Queue depth: 32 00:06:53.576 Allocate depth: 32 00:06:53.576 # threads/core: 1 00:06:53.576 Run time: 1 seconds 00:06:53.576 Verify: Yes 00:06:53.576 00:06:53.576 Running for 1 seconds... 
00:06:53.576 00:06:53.576 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.576 ------------------------------------------------------------------------------------ 00:06:53.576 0,0 72000/s 132 MiB/s 0 0 00:06:53.576 ==================================================================================== 00:06:53.576 Total 72000/s 281 MiB/s 0 0' 00:06:53.576 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.576 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.576 05:04:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:53.576 05:04:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:53.576 05:04:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.576 05:04:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.576 05:04:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.576 05:04:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.576 05:04:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.576 05:04:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.576 05:04:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.576 05:04:50 -- accel/accel.sh@42 -- # jq -r . 00:06:53.576 [2024-11-20 05:04:50.156416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:53.576 [2024-11-20 05:04:50.156484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125438 ] 00:06:53.576 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.576 [2024-11-20 05:04:50.213830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.576 [2024-11-20 05:04:50.288136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.576 05:04:50 -- accel/accel.sh@21 -- # val= 00:06:53.576 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.576 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.576 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.576 05:04:50 -- accel/accel.sh@21 -- # val= 00:06:53.576 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.576 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.576 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.576 05:04:50 -- accel/accel.sh@21 -- # val= 00:06:53.576 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.576 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.576 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.576 05:04:50 -- accel/accel.sh@21 -- # val=0x1 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val= 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val= 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 
-- # val=decompress 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val= 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val=software 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val=32 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val=32 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val=1 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # 
IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val=Yes 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val= 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.577 05:04:50 -- accel/accel.sh@21 -- # val= 00:06:53.577 05:04:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # IFS=: 00:06:53.577 05:04:50 -- accel/accel.sh@20 -- # read -r var val 00:06:54.958 05:04:51 -- accel/accel.sh@21 -- # val= 00:06:54.958 05:04:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # IFS=: 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # read -r var val 00:06:54.958 05:04:51 -- accel/accel.sh@21 -- # val= 00:06:54.958 05:04:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # IFS=: 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # read -r var val 00:06:54.958 05:04:51 -- accel/accel.sh@21 -- # val= 00:06:54.958 05:04:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # IFS=: 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # read -r var val 00:06:54.958 05:04:51 -- accel/accel.sh@21 -- # val= 00:06:54.958 05:04:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # IFS=: 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # read -r var val 00:06:54.958 05:04:51 -- accel/accel.sh@21 
-- # val= 00:06:54.958 05:04:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # IFS=: 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # read -r var val 00:06:54.958 05:04:51 -- accel/accel.sh@21 -- # val= 00:06:54.958 05:04:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # IFS=: 00:06:54.958 05:04:51 -- accel/accel.sh@20 -- # read -r var val 00:06:54.958 05:04:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.958 05:04:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:54.959 05:04:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.959 00:06:54.959 real 0m2.706s 00:06:54.959 user 0m2.485s 00:06:54.959 sys 0m0.222s 00:06:54.959 05:04:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.959 05:04:51 -- common/autotest_common.sh@10 -- # set +x 00:06:54.959 ************************************ 00:06:54.959 END TEST accel_decomp 00:06:54.959 ************************************ 00:06:54.959 05:04:51 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.959 05:04:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:54.959 05:04:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.959 05:04:51 -- common/autotest_common.sh@10 -- # set +x 00:06:54.959 ************************************ 00:06:54.959 START TEST accel_decmop_full 00:06:54.959 ************************************ 00:06:54.959 05:04:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.959 05:04:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.959 05:04:51 -- accel/accel.sh@17 -- # local accel_module 00:06:54.959 05:04:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 
-y -o 0 00:06:54.959 05:04:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.959 05:04:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.959 05:04:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.959 05:04:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.959 05:04:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.959 05:04:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.959 05:04:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.959 05:04:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.959 05:04:51 -- accel/accel.sh@42 -- # jq -r . 00:06:54.959 [2024-11-20 05:04:51.548350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.959 [2024-11-20 05:04:51.548430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125685 ] 00:06:54.959 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.959 [2024-11-20 05:04:51.604231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.959 [2024-11-20 05:04:51.672536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.342 05:04:52 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:56.342 00:06:56.342 SPDK Configuration: 00:06:56.342 Core mask: 0x1 00:06:56.342 00:06:56.342 Accel Perf Configuration: 00:06:56.342 Workload Type: decompress 00:06:56.342 Transfer size: 111250 bytes 00:06:56.342 Vector count 1 00:06:56.342 Module: software 00:06:56.342 File Name: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:56.342 Queue depth: 32 00:06:56.342 Allocate depth: 32 00:06:56.342 # threads/core: 1 00:06:56.342 Run time: 1 seconds 00:06:56.342 Verify: Yes 00:06:56.342 00:06:56.342 Running for 1 seconds... 00:06:56.342 00:06:56.342 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.342 ------------------------------------------------------------------------------------ 00:06:56.342 0,0 5024/s 207 MiB/s 0 0 00:06:56.342 ==================================================================================== 00:06:56.342 Total 5024/s 533 MiB/s 0 0' 00:06:56.342 05:04:52 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:52 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:56.342 05:04:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:56.342 05:04:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.342 05:04:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.342 05:04:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.342 05:04:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.342 05:04:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.342 05:04:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.342 05:04:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.342 05:04:52 -- accel/accel.sh@42 -- # jq -r . 
00:06:56.342 [2024-11-20 05:04:52.905096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.342 [2024-11-20 05:04:52.905157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125909 ] 00:06:56.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.342 [2024-11-20 05:04:52.961135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.342 [2024-11-20 05:04:53.028419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val= 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val= 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val= 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val=0x1 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val= 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val= 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- 
accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val=decompress 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val= 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val=software 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val=32 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val=32 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- 
accel/accel.sh@21 -- # val=1 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val=Yes 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val= 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.342 05:04:53 -- accel/accel.sh@21 -- # val= 00:06:56.342 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.342 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.723 05:04:54 -- accel/accel.sh@21 -- # val= 00:06:57.723 05:04:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.723 05:04:54 -- accel/accel.sh@21 -- # val= 00:06:57.723 05:04:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.723 05:04:54 -- accel/accel.sh@21 -- # val= 00:06:57.723 05:04:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.723 05:04:54 -- accel/accel.sh@21 -- # val= 00:06:57.723 05:04:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.723 05:04:54 
-- accel/accel.sh@20 -- # IFS=: 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.723 05:04:54 -- accel/accel.sh@21 -- # val= 00:06:57.723 05:04:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.723 05:04:54 -- accel/accel.sh@21 -- # val= 00:06:57.723 05:04:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.723 05:04:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.723 05:04:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.723 05:04:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:57.723 05:04:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.723 00:06:57.723 real 0m2.711s 00:06:57.723 user 0m2.482s 00:06:57.723 sys 0m0.227s 00:06:57.723 05:04:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.723 05:04:54 -- common/autotest_common.sh@10 -- # set +x 00:06:57.723 ************************************ 00:06:57.723 END TEST accel_decmop_full 00:06:57.723 ************************************ 00:06:57.723 05:04:54 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.723 05:04:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:57.723 05:04:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.723 05:04:54 -- common/autotest_common.sh@10 -- # set +x 00:06:57.723 ************************************ 00:06:57.723 START TEST accel_decomp_mcore 00:06:57.723 ************************************ 00:06:57.723 05:04:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.723 05:04:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.723 05:04:54 -- accel/accel.sh@17 -- # local 
accel_module 00:06:57.723 05:04:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.723 05:04:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.723 05:04:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.723 05:04:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.723 05:04:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.723 05:04:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.723 05:04:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.723 05:04:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.723 05:04:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.723 05:04:54 -- accel/accel.sh@42 -- # jq -r . 00:06:57.723 [2024-11-20 05:04:54.294359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.723 [2024-11-20 05:04:54.294429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126162 ] 00:06:57.723 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.723 [2024-11-20 05:04:54.354229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.723 [2024-11-20 05:04:54.424909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.723 [2024-11-20 05:04:54.425010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.723 [2024-11-20 05:04:54.425074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.723 [2024-11-20 05:04:54.425076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.105 05:04:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:59.105 00:06:59.105 SPDK Configuration: 00:06:59.105 Core mask: 0xf 00:06:59.105 00:06:59.105 Accel Perf Configuration: 00:06:59.105 Workload Type: decompress 00:06:59.105 Transfer size: 4096 bytes 00:06:59.105 Vector count 1 00:06:59.105 Module: software 00:06:59.105 File Name: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:59.105 Queue depth: 32 00:06:59.105 Allocate depth: 32 00:06:59.105 # threads/core: 1 00:06:59.105 Run time: 1 seconds 00:06:59.105 Verify: Yes 00:06:59.105 00:06:59.105 Running for 1 seconds... 00:06:59.105 00:06:59.105 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.105 ------------------------------------------------------------------------------------ 00:06:59.105 0,0 61152/s 112 MiB/s 0 0 00:06:59.105 3,0 63168/s 116 MiB/s 0 0 00:06:59.105 2,0 63136/s 116 MiB/s 0 0 00:06:59.105 1,0 62944/s 115 MiB/s 0 0 00:06:59.105 ==================================================================================== 00:06:59.105 Total 250400/s 978 MiB/s 0 0' 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.105 05:04:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.105 05:04:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.105 05:04:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.105 05:04:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.105 05:04:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.105 05:04:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.105 05:04:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.105 05:04:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.105 05:04:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.105 05:04:55 -- 
accel/accel.sh@42 -- # jq -r . 00:06:59.105 [2024-11-20 05:04:55.656322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.105 [2024-11-20 05:04:55.656388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126390 ] 00:06:59.105 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.105 [2024-11-20 05:04:55.714671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.105 [2024-11-20 05:04:55.784730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.105 [2024-11-20 05:04:55.784825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.105 [2024-11-20 05:04:55.784914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.105 [2024-11-20 05:04:55.784916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.105 05:04:55 -- accel/accel.sh@21 -- # val= 00:06:59.105 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.105 05:04:55 -- accel/accel.sh@21 -- # val= 00:06:59.105 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.105 05:04:55 -- accel/accel.sh@21 -- # val= 00:06:59.105 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.105 05:04:55 -- accel/accel.sh@21 -- # val=0xf 00:06:59.105 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.105 05:04:55 -- 
accel/accel.sh@21 -- # val= 00:06:59.105 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.105 05:04:55 -- accel/accel.sh@21 -- # val= 00:06:59.105 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.105 05:04:55 -- accel/accel.sh@21 -- # val=decompress 00:06:59.105 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.105 05:04:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.105 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.105 05:04:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val= 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val=software 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val=32 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- 
accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val=32 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val=1 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val=Yes 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val= 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.106 05:04:55 -- accel/accel.sh@21 -- # val= 00:06:59.106 05:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:59.106 05:04:55 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:00.487 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:00.487 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 
05:04:56 -- accel/accel.sh@21 -- # val= 00:07:00.487 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:00.487 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:00.487 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:00.487 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:00.487 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:00.487 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:00.487 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.487 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.487 05:04:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.487 05:04:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:00.487 05:04:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.487 00:07:00.487 real 0m2.728s 00:07:00.487 user 0m9.153s 00:07:00.487 sys 0m0.241s 00:07:00.487 05:04:56 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:00.487 05:04:56 -- common/autotest_common.sh@10 -- # set +x 00:07:00.487 ************************************ 00:07:00.487 END TEST accel_decomp_mcore 00:07:00.487 ************************************ 00:07:00.487 05:04:57 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.487 05:04:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:00.487 05:04:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.487 05:04:57 -- common/autotest_common.sh@10 -- # set +x 00:07:00.487 ************************************ 00:07:00.487 START TEST accel_decomp_full_mcore 00:07:00.487 ************************************ 00:07:00.487 05:04:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.487 05:04:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.487 05:04:57 -- accel/accel.sh@17 -- # local accel_module 00:07:00.487 05:04:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.487 05:04:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.487 05:04:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.487 05:04:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.487 05:04:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.487 05:04:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.487 05:04:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.487 05:04:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.487 05:04:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.487 05:04:57 -- accel/accel.sh@42 -- # jq -r . 
00:07:00.487 [2024-11-20 05:04:57.063704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.487 [2024-11-20 05:04:57.063784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126642 ] 00:07:00.487 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.487 [2024-11-20 05:04:57.123137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.487 [2024-11-20 05:04:57.194027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.487 [2024-11-20 05:04:57.194127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.487 [2024-11-20 05:04:57.194150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.487 [2024-11-20 05:04:57.194151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.869 05:04:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:01.869 00:07:01.869 SPDK Configuration: 00:07:01.869 Core mask: 0xf 00:07:01.869 00:07:01.869 Accel Perf Configuration: 00:07:01.869 Workload Type: decompress 00:07:01.869 Transfer size: 111250 bytes 00:07:01.869 Vector count 1 00:07:01.869 Module: software 00:07:01.869 File Name: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:07:01.869 Queue depth: 32 00:07:01.869 Allocate depth: 32 00:07:01.869 # threads/core: 1 00:07:01.869 Run time: 1 seconds 00:07:01.869 Verify: Yes 00:07:01.869 00:07:01.869 Running for 1 seconds... 
00:07:01.869 00:07:01.869 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.869 ------------------------------------------------------------------------------------ 00:07:01.869 0,0 4640/s 191 MiB/s 0 0 00:07:01.869 3,0 4800/s 198 MiB/s 0 0 00:07:01.869 2,0 4800/s 198 MiB/s 0 0 00:07:01.869 1,0 4800/s 198 MiB/s 0 0 00:07:01.869 ==================================================================================== 00:07:01.869 Total 19040/s 2020 MiB/s 0 0' 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.869 05:04:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.869 05:04:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.869 05:04:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.869 05:04:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.869 05:04:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.869 05:04:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.869 05:04:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.869 05:04:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.869 05:04:58 -- accel/accel.sh@42 -- # jq -r . 00:07:01.869 [2024-11-20 05:04:58.439412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:01.869 [2024-11-20 05:04:58.439489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126880 ] 00:07:01.869 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.869 [2024-11-20 05:04:58.498221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.869 [2024-11-20 05:04:58.567657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.869 [2024-11-20 05:04:58.567757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.869 [2024-11-20 05:04:58.567857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.869 [2024-11-20 05:04:58.567860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val= 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val= 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val= 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val=0xf 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val= 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 
-- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val= 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val=decompress 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val= 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val=software 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val=32 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val=32 00:07:01.869 05:04:58 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val=1 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val=Yes 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.869 05:04:58 -- accel/accel.sh@21 -- # val= 00:07:01.869 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.869 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.870 05:04:58 -- accel/accel.sh@21 -- # val= 00:07:01.870 05:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.870 05:04:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.870 05:04:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@21 -- # val= 00:07:03.253 05:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@21 -- # val= 00:07:03.253 05:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@21 -- # val= 00:07:03.253 05:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:03.253 
05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@21 -- # val= 00:07:03.253 05:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@21 -- # val= 00:07:03.253 05:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@21 -- # val= 00:07:03.253 05:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@21 -- # val= 00:07:03.253 05:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@21 -- # val= 00:07:03.253 05:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@21 -- # val= 00:07:03.253 05:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:03.253 05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:03.253 05:04:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.253 05:04:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:03.253 05:04:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.253 00:07:03.253 real 0m2.758s 00:07:03.253 user 0m9.247s 00:07:03.253 sys 0m0.247s 00:07:03.253 05:04:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.253 05:04:59 -- common/autotest_common.sh@10 -- # set +x 00:07:03.253 ************************************ 00:07:03.253 END TEST 
accel_decomp_full_mcore 00:07:03.253 ************************************ 00:07:03.253 05:04:59 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.253 05:04:59 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:03.253 05:04:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.253 05:04:59 -- common/autotest_common.sh@10 -- # set +x 00:07:03.253 ************************************ 00:07:03.253 START TEST accel_decomp_mthread 00:07:03.253 ************************************ 00:07:03.253 05:04:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.253 05:04:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.253 05:04:59 -- accel/accel.sh@17 -- # local accel_module 00:07:03.253 05:04:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.253 05:04:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.253 05:04:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.253 05:04:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.253 05:04:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.253 05:04:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.253 05:04:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.253 05:04:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.253 05:04:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.253 05:04:59 -- accel/accel.sh@42 -- # jq -r . 00:07:03.253 [2024-11-20 05:04:59.858583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:03.253 [2024-11-20 05:04:59.858662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127137 ] 00:07:03.253 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.253 [2024-11-20 05:04:59.915477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.253 [2024-11-20 05:04:59.984105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.636 05:05:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:04.636 00:07:04.636 SPDK Configuration: 00:07:04.636 Core mask: 0x1 00:07:04.636 00:07:04.636 Accel Perf Configuration: 00:07:04.636 Workload Type: decompress 00:07:04.636 Transfer size: 4096 bytes 00:07:04.636 Vector count 1 00:07:04.636 Module: software 00:07:04.636 File Name: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:07:04.636 Queue depth: 32 00:07:04.636 Allocate depth: 32 00:07:04.636 # threads/core: 2 00:07:04.636 Run time: 1 seconds 00:07:04.636 Verify: Yes 00:07:04.636 00:07:04.636 Running for 1 seconds... 
00:07:04.636 00:07:04.636 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.636 ------------------------------------------------------------------------------------ 00:07:04.636 0,1 36128/s 66 MiB/s 0 0 00:07:04.636 0,0 36000/s 66 MiB/s 0 0 00:07:04.636 ==================================================================================== 00:07:04.636 Total 72128/s 281 MiB/s 0 0' 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.636 05:05:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.636 05:05:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.636 05:05:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.636 05:05:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.636 05:05:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.636 05:05:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.636 05:05:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.636 05:05:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.636 05:05:01 -- accel/accel.sh@42 -- # jq -r . 00:07:04.636 [2024-11-20 05:05:01.213374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:04.636 [2024-11-20 05:05:01.213436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127363 ] 00:07:04.636 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.636 [2024-11-20 05:05:01.270356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.636 [2024-11-20 05:05:01.337557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val=0x1 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 
-- # val=decompress 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val=software 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.636 05:05:01 -- accel/accel.sh@21 -- # val=32 00:07:04.636 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.636 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.637 05:05:01 -- accel/accel.sh@21 -- # val=32 00:07:04.637 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.637 05:05:01 -- accel/accel.sh@21 -- # val=2 00:07:04.637 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # 
IFS=: 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.637 05:05:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.637 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.637 05:05:01 -- accel/accel.sh@21 -- # val=Yes 00:07:04.637 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.637 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:04.637 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.637 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:04.637 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:04.637 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.018 05:05:02 -- accel/accel.sh@21 -- # val= 00:07:06.018 05:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # IFS=: 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # read -r var val 00:07:06.018 05:05:02 -- accel/accel.sh@21 -- # val= 00:07:06.018 05:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # IFS=: 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # read -r var val 00:07:06.018 05:05:02 -- accel/accel.sh@21 -- # val= 00:07:06.018 05:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # IFS=: 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # read -r var val 00:07:06.018 05:05:02 -- accel/accel.sh@21 -- # val= 00:07:06.018 05:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # IFS=: 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # read -r var val 00:07:06.018 05:05:02 -- accel/accel.sh@21 
-- # val= 00:07:06.018 05:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # IFS=: 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # read -r var val 00:07:06.018 05:05:02 -- accel/accel.sh@21 -- # val= 00:07:06.018 05:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # IFS=: 00:07:06.018 05:05:02 -- accel/accel.sh@20 -- # read -r var val 00:07:06.018 05:05:02 -- accel/accel.sh@21 -- # val= 00:07:06.018 05:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.019 05:05:02 -- accel/accel.sh@20 -- # IFS=: 00:07:06.019 05:05:02 -- accel/accel.sh@20 -- # read -r var val 00:07:06.019 05:05:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.019 05:05:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:06.019 05:05:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.019 00:07:06.019 real 0m2.713s 00:07:06.019 user 0m2.481s 00:07:06.019 sys 0m0.241s 00:07:06.019 05:05:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.019 05:05:02 -- common/autotest_common.sh@10 -- # set +x 00:07:06.019 ************************************ 00:07:06.019 END TEST accel_decomp_mthread 00:07:06.019 ************************************ 00:07:06.019 05:05:02 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.019 05:05:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:06.019 05:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.019 05:05:02 -- common/autotest_common.sh@10 -- # set +x 00:07:06.019 ************************************ 00:07:06.019 START TEST accel_deomp_full_mthread 00:07:06.019 ************************************ 00:07:06.019 05:05:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 
00:07:06.019 05:05:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.019 05:05:02 -- accel/accel.sh@17 -- # local accel_module 00:07:06.019 05:05:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.019 05:05:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.019 05:05:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.019 05:05:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.019 05:05:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.019 05:05:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.019 05:05:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.019 05:05:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.019 05:05:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.019 05:05:02 -- accel/accel.sh@42 -- # jq -r . 00:07:06.019 [2024-11-20 05:05:02.609293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.019 [2024-11-20 05:05:02.609357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127615 ] 00:07:06.019 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.019 [2024-11-20 05:05:02.667170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.019 [2024-11-20 05:05:02.735638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.401 05:05:03 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:07.401 00:07:07.401 SPDK Configuration: 00:07:07.401 Core mask: 0x1 00:07:07.401 00:07:07.401 Accel Perf Configuration: 00:07:07.401 Workload Type: decompress 00:07:07.401 Transfer size: 111250 bytes 00:07:07.401 Vector count 1 00:07:07.401 Module: software 00:07:07.401 File Name: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:07:07.401 Queue depth: 32 00:07:07.401 Allocate depth: 32 00:07:07.401 # threads/core: 2 00:07:07.401 Run time: 1 seconds 00:07:07.401 Verify: Yes 00:07:07.401 00:07:07.401 Running for 1 seconds... 00:07:07.401 00:07:07.401 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.401 ------------------------------------------------------------------------------------ 00:07:07.401 0,1 2528/s 104 MiB/s 0 0 00:07:07.401 0,0 2496/s 103 MiB/s 0 0 00:07:07.401 ==================================================================================== 00:07:07.401 Total 5024/s 533 MiB/s 0 0' 00:07:07.401 05:05:03 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:03 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.401 05:05:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.401 05:05:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.401 05:05:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.401 05:05:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.401 05:05:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.401 05:05:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.401 05:05:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.401 05:05:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.401 05:05:03 -- accel/accel.sh@42 -- # jq -r . 
00:07:07.401 [2024-11-20 05:05:03.985563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.401 [2024-11-20 05:05:03.985624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127853 ] 00:07:07.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.401 [2024-11-20 05:05:04.040703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.401 [2024-11-20 05:05:04.108145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val= 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val= 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val= 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val=0x1 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val= 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val= 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- 
accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val=decompress 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val= 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val=software 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val=32 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val=32 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- 
accel/accel.sh@21 -- # val=2 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val=Yes 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.401 05:05:04 -- accel/accel.sh@21 -- # val= 00:07:07.401 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.401 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.402 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.402 05:05:04 -- accel/accel.sh@21 -- # val= 00:07:07.402 05:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.402 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.402 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.781 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:08.781 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:08.781 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:08.781 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:08.781 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:08.781 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:08.781 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:08.781 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.781 05:05:05 
-- accel/accel.sh@20 -- # IFS=: 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:08.781 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:08.781 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:08.781 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:08.781 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:08.781 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:08.781 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:08.781 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:08.781 05:05:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.781 05:05:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:08.781 05:05:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.781 00:07:08.781 real 0m2.751s 00:07:08.781 user 0m2.539s 00:07:08.781 sys 0m0.219s 00:07:08.781 05:05:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.781 05:05:05 -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 ************************************ 00:07:08.781 END TEST accel_deomp_full_mthread 00:07:08.781 ************************************ 00:07:08.781 05:05:05 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:08.781 05:05:05 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.781 05:05:05 -- accel/accel.sh@129 -- # build_accel_config 00:07:08.781 05:05:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:08.781 05:05:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.781 05:05:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.781 05:05:05 -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 
05:05:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.781 05:05:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.781 05:05:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.781 05:05:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.781 05:05:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.781 05:05:05 -- accel/accel.sh@42 -- # jq -r . 00:07:08.781 ************************************ 00:07:08.781 START TEST accel_dif_functional_tests 00:07:08.781 ************************************ 00:07:08.781 05:05:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.782 [2024-11-20 05:05:05.413817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.782 [2024-11-20 05:05:05.413867] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128115 ] 00:07:08.782 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.782 [2024-11-20 05:05:05.469122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.782 [2024-11-20 05:05:05.538835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.782 [2024-11-20 05:05:05.538931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.782 [2024-11-20 05:05:05.538933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.782 00:07:08.782 00:07:08.782 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.782 http://cunit.sourceforge.net/ 00:07:08.782 00:07:08.782 00:07:08.782 Suite: accel_dif 00:07:08.782 Test: verify: DIF generated, GUARD check ...passed 00:07:08.782 Test: verify: DIF generated, APPTAG check ...passed 00:07:08.782 Test: verify: DIF generated, REFTAG check ...passed 00:07:08.782 Test: verify: DIF not generated, GUARD check ...[2024-11-20 05:05:05.607026] 
dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.782 [2024-11-20 05:05:05.607074] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.782 passed 00:07:08.782 Test: verify: DIF not generated, APPTAG check ...[2024-11-20 05:05:05.607103] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.782 [2024-11-20 05:05:05.607117] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.782 passed 00:07:08.782 Test: verify: DIF not generated, REFTAG check ...[2024-11-20 05:05:05.607135] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.782 [2024-11-20 05:05:05.607149] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.782 passed 00:07:08.782 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:08.782 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-20 05:05:05.607187] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:08.782 passed 00:07:08.782 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:08.782 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:08.782 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:08.782 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-20 05:05:05.607284] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:08.782 passed 00:07:08.782 Test: generate copy: DIF generated, GUARD check ...passed 00:07:08.782 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:08.782 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:08.782 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:08.782 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 
00:07:08.782 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:08.782 Test: generate copy: iovecs-len validate ...[2024-11-20 05:05:05.607446] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:08.782 passed 00:07:08.782 Test: generate copy: buffer alignment validate ...passed 00:07:08.782 00:07:08.782 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.782 suites 1 1 n/a 0 0 00:07:08.782 tests 20 20 20 0 0 00:07:08.782 asserts 204 204 204 0 n/a 00:07:08.782 00:07:08.782 Elapsed time = 0.002 seconds 00:07:09.042 00:07:09.042 real 0m0.421s 00:07:09.042 user 0m0.630s 00:07:09.042 sys 0m0.147s 00:07:09.042 05:05:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.042 05:05:05 -- common/autotest_common.sh@10 -- # set +x 00:07:09.042 ************************************ 00:07:09.042 END TEST accel_dif_functional_tests 00:07:09.042 ************************************ 00:07:09.042 00:07:09.042 real 0m57.566s 00:07:09.042 user 1m6.130s 00:07:09.042 sys 0m6.040s 00:07:09.042 05:05:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.042 05:05:05 -- common/autotest_common.sh@10 -- # set +x 00:07:09.042 ************************************ 00:07:09.042 END TEST accel 00:07:09.042 ************************************ 00:07:09.042 05:05:05 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:09.042 05:05:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.042 05:05:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.042 05:05:05 -- common/autotest_common.sh@10 -- # set +x 00:07:09.042 ************************************ 00:07:09.042 START TEST accel_rpc 00:07:09.042 ************************************ 00:07:09.042 05:05:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel_rpc.sh 
00:07:09.302 * Looking for test storage... 00:07:09.302 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel 00:07:09.302 05:05:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:09.302 05:05:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:09.302 05:05:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:09.302 05:05:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:09.302 05:05:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:09.302 05:05:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:09.302 05:05:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:09.302 05:05:06 -- scripts/common.sh@335 -- # IFS=.-: 00:07:09.302 05:05:06 -- scripts/common.sh@335 -- # read -ra ver1 00:07:09.302 05:05:06 -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.302 05:05:06 -- scripts/common.sh@336 -- # read -ra ver2 00:07:09.302 05:05:06 -- scripts/common.sh@337 -- # local 'op=<' 00:07:09.302 05:05:06 -- scripts/common.sh@339 -- # ver1_l=2 00:07:09.302 05:05:06 -- scripts/common.sh@340 -- # ver2_l=1 00:07:09.302 05:05:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:09.302 05:05:06 -- scripts/common.sh@343 -- # case "$op" in 00:07:09.302 05:05:06 -- scripts/common.sh@344 -- # : 1 00:07:09.302 05:05:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:09.302 05:05:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.302 05:05:06 -- scripts/common.sh@364 -- # decimal 1 00:07:09.302 05:05:06 -- scripts/common.sh@352 -- # local d=1 00:07:09.302 05:05:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.302 05:05:06 -- scripts/common.sh@354 -- # echo 1 00:07:09.302 05:05:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:09.302 05:05:06 -- scripts/common.sh@365 -- # decimal 2 00:07:09.302 05:05:06 -- scripts/common.sh@352 -- # local d=2 00:07:09.302 05:05:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.302 05:05:06 -- scripts/common.sh@354 -- # echo 2 00:07:09.303 05:05:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:09.303 05:05:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:09.303 05:05:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:09.303 05:05:06 -- scripts/common.sh@367 -- # return 0 00:07:09.303 05:05:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.303 05:05:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:09.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.303 --rc genhtml_branch_coverage=1 00:07:09.303 --rc genhtml_function_coverage=1 00:07:09.303 --rc genhtml_legend=1 00:07:09.303 --rc geninfo_all_blocks=1 00:07:09.303 --rc geninfo_unexecuted_blocks=1 00:07:09.303 00:07:09.303 ' 00:07:09.303 05:05:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:09.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.303 --rc genhtml_branch_coverage=1 00:07:09.303 --rc genhtml_function_coverage=1 00:07:09.303 --rc genhtml_legend=1 00:07:09.303 --rc geninfo_all_blocks=1 00:07:09.303 --rc geninfo_unexecuted_blocks=1 00:07:09.303 00:07:09.303 ' 00:07:09.303 05:05:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:09.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.303 --rc genhtml_branch_coverage=1 00:07:09.303 --rc 
genhtml_function_coverage=1 00:07:09.303 --rc genhtml_legend=1 00:07:09.303 --rc geninfo_all_blocks=1 00:07:09.303 --rc geninfo_unexecuted_blocks=1 00:07:09.303 00:07:09.303 ' 00:07:09.303 05:05:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:09.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.303 --rc genhtml_branch_coverage=1 00:07:09.303 --rc genhtml_function_coverage=1 00:07:09.303 --rc genhtml_legend=1 00:07:09.303 --rc geninfo_all_blocks=1 00:07:09.303 --rc geninfo_unexecuted_blocks=1 00:07:09.303 00:07:09.303 ' 00:07:09.303 05:05:06 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:09.303 05:05:06 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=128381 00:07:09.303 05:05:06 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:09.303 05:05:06 -- accel/accel_rpc.sh@15 -- # waitforlisten 128381 00:07:09.303 05:05:06 -- common/autotest_common.sh@829 -- # '[' -z 128381 ']' 00:07:09.303 05:05:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.303 05:05:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.303 05:05:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.303 05:05:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.303 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:07:09.303 [2024-11-20 05:05:06.064148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:09.303 [2024-11-20 05:05:06.064193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128381 ] 00:07:09.303 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.303 [2024-11-20 05:05:06.119193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.563 [2024-11-20 05:05:06.187653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.563 [2024-11-20 05:05:06.187769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.132 05:05:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.132 05:05:06 -- common/autotest_common.sh@862 -- # return 0 00:07:10.132 05:05:06 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:10.132 05:05:06 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:10.132 05:05:06 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:10.132 05:05:06 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:10.132 05:05:06 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:10.132 05:05:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.132 05:05:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.132 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:07:10.132 ************************************ 00:07:10.132 START TEST accel_assign_opcode 00:07:10.132 ************************************ 00:07:10.132 05:05:06 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:10.132 05:05:06 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:10.132 05:05:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.132 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:07:10.132 [2024-11-20 05:05:06.885798] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module incorrect 00:07:10.132 05:05:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.132 05:05:06 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:10.132 05:05:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.132 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:07:10.132 [2024-11-20 05:05:06.897822] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:10.132 05:05:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.132 05:05:06 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:10.132 05:05:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.132 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:07:10.392 05:05:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.392 05:05:07 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:10.392 05:05:07 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:10.392 05:05:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.392 05:05:07 -- accel/accel_rpc.sh@42 -- # grep software 00:07:10.392 05:05:07 -- common/autotest_common.sh@10 -- # set +x 00:07:10.392 05:05:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.392 software 00:07:10.392 00:07:10.392 real 0m0.232s 00:07:10.392 user 0m0.037s 00:07:10.392 sys 0m0.013s 00:07:10.392 05:05:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.392 05:05:07 -- common/autotest_common.sh@10 -- # set +x 00:07:10.392 ************************************ 00:07:10.392 END TEST accel_assign_opcode 00:07:10.392 ************************************ 00:07:10.392 05:05:07 -- accel/accel_rpc.sh@55 -- # killprocess 128381 00:07:10.392 05:05:07 -- common/autotest_common.sh@936 -- # '[' -z 128381 ']' 00:07:10.392 05:05:07 -- common/autotest_common.sh@940 -- # kill -0 128381 00:07:10.392 05:05:07 -- common/autotest_common.sh@941 -- # uname 00:07:10.393 
05:05:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.393 05:05:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128381 00:07:10.393 05:05:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.393 05:05:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.393 05:05:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128381' 00:07:10.393 killing process with pid 128381 00:07:10.393 05:05:07 -- common/autotest_common.sh@955 -- # kill 128381 00:07:10.393 05:05:07 -- common/autotest_common.sh@960 -- # wait 128381 00:07:10.962 00:07:10.962 real 0m1.667s 00:07:10.962 user 0m1.739s 00:07:10.962 sys 0m0.419s 00:07:10.962 05:05:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.962 05:05:07 -- common/autotest_common.sh@10 -- # set +x 00:07:10.962 ************************************ 00:07:10.962 END TEST accel_rpc 00:07:10.962 ************************************ 00:07:10.962 05:05:07 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:07:10.962 05:05:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.962 05:05:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.962 05:05:07 -- common/autotest_common.sh@10 -- # set +x 00:07:10.962 ************************************ 00:07:10.962 START TEST app_cmdline 00:07:10.962 ************************************ 00:07:10.962 05:05:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:07:10.962 * Looking for test storage... 
00:07:10.962 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:07:10.962 05:05:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:10.962 05:05:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:10.962 05:05:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:10.962 05:05:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:10.962 05:05:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:10.962 05:05:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:10.962 05:05:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:10.962 05:05:07 -- scripts/common.sh@335 -- # IFS=.-: 00:07:10.962 05:05:07 -- scripts/common.sh@335 -- # read -ra ver1 00:07:10.962 05:05:07 -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.962 05:05:07 -- scripts/common.sh@336 -- # read -ra ver2 00:07:10.962 05:05:07 -- scripts/common.sh@337 -- # local 'op=<' 00:07:10.962 05:05:07 -- scripts/common.sh@339 -- # ver1_l=2 00:07:10.962 05:05:07 -- scripts/common.sh@340 -- # ver2_l=1 00:07:10.962 05:05:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:10.962 05:05:07 -- scripts/common.sh@343 -- # case "$op" in 00:07:10.962 05:05:07 -- scripts/common.sh@344 -- # : 1 00:07:10.962 05:05:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:10.962 05:05:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.962 05:05:07 -- scripts/common.sh@364 -- # decimal 1 00:07:10.962 05:05:07 -- scripts/common.sh@352 -- # local d=1 00:07:10.962 05:05:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.962 05:05:07 -- scripts/common.sh@354 -- # echo 1 00:07:10.962 05:05:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:10.962 05:05:07 -- scripts/common.sh@365 -- # decimal 2 00:07:10.962 05:05:07 -- scripts/common.sh@352 -- # local d=2 00:07:10.962 05:05:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.962 05:05:07 -- scripts/common.sh@354 -- # echo 2 00:07:10.962 05:05:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:10.962 05:05:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:10.962 05:05:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:10.962 05:05:07 -- scripts/common.sh@367 -- # return 0 00:07:10.962 05:05:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.962 05:05:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:10.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.962 --rc genhtml_branch_coverage=1 00:07:10.962 --rc genhtml_function_coverage=1 00:07:10.962 --rc genhtml_legend=1 00:07:10.962 --rc geninfo_all_blocks=1 00:07:10.962 --rc geninfo_unexecuted_blocks=1 00:07:10.962 00:07:10.962 ' 00:07:10.962 05:05:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:10.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.962 --rc genhtml_branch_coverage=1 00:07:10.962 --rc genhtml_function_coverage=1 00:07:10.962 --rc genhtml_legend=1 00:07:10.962 --rc geninfo_all_blocks=1 00:07:10.962 --rc geninfo_unexecuted_blocks=1 00:07:10.962 00:07:10.962 ' 00:07:10.962 05:05:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:10.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.962 --rc genhtml_branch_coverage=1 00:07:10.962 --rc 
genhtml_function_coverage=1 00:07:10.962 --rc genhtml_legend=1 00:07:10.962 --rc geninfo_all_blocks=1 00:07:10.962 --rc geninfo_unexecuted_blocks=1 00:07:10.962 00:07:10.962 ' 00:07:10.962 05:05:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:10.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.963 --rc genhtml_branch_coverage=1 00:07:10.963 --rc genhtml_function_coverage=1 00:07:10.963 --rc genhtml_legend=1 00:07:10.963 --rc geninfo_all_blocks=1 00:07:10.963 --rc geninfo_unexecuted_blocks=1 00:07:10.963 00:07:10.963 ' 00:07:10.963 05:05:07 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:10.963 05:05:07 -- app/cmdline.sh@17 -- # spdk_tgt_pid=128693 00:07:10.963 05:05:07 -- app/cmdline.sh@18 -- # waitforlisten 128693 00:07:10.963 05:05:07 -- common/autotest_common.sh@829 -- # '[' -z 128693 ']' 00:07:10.963 05:05:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.963 05:05:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.963 05:05:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.963 05:05:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.963 05:05:07 -- common/autotest_common.sh@10 -- # set +x 00:07:10.963 05:05:07 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:10.963 [2024-11-20 05:05:07.771967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:10.963 [2024-11-20 05:05:07.772017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128693 ] 00:07:11.223 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.223 [2024-11-20 05:05:07.826730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.223 [2024-11-20 05:05:07.902993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:11.223 [2024-11-20 05:05:07.903109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.793 05:05:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.793 05:05:08 -- common/autotest_common.sh@862 -- # return 0 00:07:11.793 05:05:08 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:12.054 { 00:07:12.054 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:12.054 "fields": { 00:07:12.054 "major": 24, 00:07:12.054 "minor": 1, 00:07:12.054 "patch": 1, 00:07:12.054 "suffix": "-pre", 00:07:12.054 "commit": "c13c99a5e" 00:07:12.054 } 00:07:12.054 } 00:07:12.054 05:05:08 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:12.054 05:05:08 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:12.054 05:05:08 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:12.054 05:05:08 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:12.054 05:05:08 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:12.054 05:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.054 05:05:08 -- common/autotest_common.sh@10 -- # set +x 00:07:12.054 05:05:08 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:12.054 05:05:08 -- app/cmdline.sh@26 -- # sort 00:07:12.054 05:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.054 
05:05:08 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:12.054 05:05:08 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:12.054 05:05:08 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.054 05:05:08 -- common/autotest_common.sh@650 -- # local es=0 00:07:12.054 05:05:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.054 05:05:08 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:07:12.054 05:05:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.054 05:05:08 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:07:12.054 05:05:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.054 05:05:08 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:07:12.054 05:05:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.054 05:05:08 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:07:12.054 05:05:08 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:07:12.054 05:05:08 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.314 request: 00:07:12.314 { 00:07:12.314 "method": "env_dpdk_get_mem_stats", 00:07:12.314 "req_id": 1 00:07:12.314 } 00:07:12.314 Got JSON-RPC error response 00:07:12.314 response: 00:07:12.314 { 00:07:12.314 "code": -32601, 00:07:12.314 "message": "Method not found" 00:07:12.314 } 00:07:12.314 05:05:08 -- common/autotest_common.sh@653 
-- # es=1 00:07:12.314 05:05:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.314 05:05:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.314 05:05:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.314 05:05:08 -- app/cmdline.sh@1 -- # killprocess 128693 00:07:12.314 05:05:08 -- common/autotest_common.sh@936 -- # '[' -z 128693 ']' 00:07:12.314 05:05:08 -- common/autotest_common.sh@940 -- # kill -0 128693 00:07:12.314 05:05:08 -- common/autotest_common.sh@941 -- # uname 00:07:12.314 05:05:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.314 05:05:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128693 00:07:12.314 05:05:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:12.314 05:05:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:12.314 05:05:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128693' 00:07:12.314 killing process with pid 128693 00:07:12.314 05:05:09 -- common/autotest_common.sh@955 -- # kill 128693 00:07:12.314 05:05:09 -- common/autotest_common.sh@960 -- # wait 128693 00:07:12.588 00:07:12.588 real 0m1.773s 00:07:12.589 user 0m2.117s 00:07:12.589 sys 0m0.422s 00:07:12.589 05:05:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.589 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:07:12.589 ************************************ 00:07:12.589 END TEST app_cmdline 00:07:12.589 ************************************ 00:07:12.589 05:05:09 -- spdk/autotest.sh@179 -- # run_test version /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:07:12.589 05:05:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.589 05:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.589 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:07:12.589 ************************************ 00:07:12.589 START TEST version 00:07:12.589 
************************************ 00:07:12.589 05:05:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:07:12.850 * Looking for test storage... 00:07:12.850 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:07:12.850 05:05:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:12.850 05:05:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:12.850 05:05:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:12.850 05:05:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:12.850 05:05:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:12.850 05:05:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:12.850 05:05:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:12.850 05:05:09 -- scripts/common.sh@335 -- # IFS=.-: 00:07:12.850 05:05:09 -- scripts/common.sh@335 -- # read -ra ver1 00:07:12.850 05:05:09 -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.850 05:05:09 -- scripts/common.sh@336 -- # read -ra ver2 00:07:12.850 05:05:09 -- scripts/common.sh@337 -- # local 'op=<' 00:07:12.850 05:05:09 -- scripts/common.sh@339 -- # ver1_l=2 00:07:12.850 05:05:09 -- scripts/common.sh@340 -- # ver2_l=1 00:07:12.850 05:05:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:12.850 05:05:09 -- scripts/common.sh@343 -- # case "$op" in 00:07:12.850 05:05:09 -- scripts/common.sh@344 -- # : 1 00:07:12.850 05:05:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:12.850 05:05:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.850 05:05:09 -- scripts/common.sh@364 -- # decimal 1 00:07:12.850 05:05:09 -- scripts/common.sh@352 -- # local d=1 00:07:12.850 05:05:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.850 05:05:09 -- scripts/common.sh@354 -- # echo 1 00:07:12.850 05:05:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:12.850 05:05:09 -- scripts/common.sh@365 -- # decimal 2 00:07:12.850 05:05:09 -- scripts/common.sh@352 -- # local d=2 00:07:12.850 05:05:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.850 05:05:09 -- scripts/common.sh@354 -- # echo 2 00:07:12.850 05:05:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:12.850 05:05:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:12.850 05:05:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:12.850 05:05:09 -- scripts/common.sh@367 -- # return 0 00:07:12.850 05:05:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.850 05:05:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.850 --rc genhtml_branch_coverage=1 00:07:12.850 --rc genhtml_function_coverage=1 00:07:12.850 --rc genhtml_legend=1 00:07:12.850 --rc geninfo_all_blocks=1 00:07:12.850 --rc geninfo_unexecuted_blocks=1 00:07:12.850 00:07:12.850 ' 00:07:12.850 05:05:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.850 --rc genhtml_branch_coverage=1 00:07:12.850 --rc genhtml_function_coverage=1 00:07:12.850 --rc genhtml_legend=1 00:07:12.850 --rc geninfo_all_blocks=1 00:07:12.850 --rc geninfo_unexecuted_blocks=1 00:07:12.850 00:07:12.850 ' 00:07:12.850 05:05:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.850 --rc genhtml_branch_coverage=1 00:07:12.850 --rc 
genhtml_function_coverage=1 00:07:12.850 --rc genhtml_legend=1 00:07:12.850 --rc geninfo_all_blocks=1 00:07:12.850 --rc geninfo_unexecuted_blocks=1 00:07:12.850 00:07:12.850 ' 00:07:12.850 05:05:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.850 --rc genhtml_branch_coverage=1 00:07:12.850 --rc genhtml_function_coverage=1 00:07:12.850 --rc genhtml_legend=1 00:07:12.850 --rc geninfo_all_blocks=1 00:07:12.850 --rc geninfo_unexecuted_blocks=1 00:07:12.850 00:07:12.850 ' 00:07:12.850 05:05:09 -- app/version.sh@17 -- # get_header_version major 00:07:12.850 05:05:09 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:07:12.850 05:05:09 -- app/version.sh@14 -- # cut -f2 00:07:12.850 05:05:09 -- app/version.sh@14 -- # tr -d '"' 00:07:12.850 05:05:09 -- app/version.sh@17 -- # major=24 00:07:12.850 05:05:09 -- app/version.sh@18 -- # get_header_version minor 00:07:12.850 05:05:09 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:07:12.850 05:05:09 -- app/version.sh@14 -- # cut -f2 00:07:12.850 05:05:09 -- app/version.sh@14 -- # tr -d '"' 00:07:12.850 05:05:09 -- app/version.sh@18 -- # minor=1 00:07:12.850 05:05:09 -- app/version.sh@19 -- # get_header_version patch 00:07:12.850 05:05:09 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:07:12.850 05:05:09 -- app/version.sh@14 -- # cut -f2 00:07:12.850 05:05:09 -- app/version.sh@14 -- # tr -d '"' 00:07:12.850 05:05:09 -- app/version.sh@19 -- # patch=1 00:07:12.850 05:05:09 -- app/version.sh@20 -- # get_header_version suffix 00:07:12.851 05:05:09 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:07:12.851 05:05:09 -- app/version.sh@14 -- # cut -f2 00:07:12.851 05:05:09 -- app/version.sh@14 -- # tr -d '"' 00:07:12.851 05:05:09 -- app/version.sh@20 -- # suffix=-pre 00:07:12.851 05:05:09 -- app/version.sh@22 -- # version=24.1 00:07:12.851 05:05:09 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:12.851 05:05:09 -- app/version.sh@25 -- # version=24.1.1 00:07:12.851 05:05:09 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:12.851 05:05:09 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:07:12.851 05:05:09 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:12.851 05:05:09 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:12.851 05:05:09 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:12.851 00:07:12.851 real 0m0.224s 00:07:12.851 user 0m0.137s 00:07:12.851 sys 0m0.127s 00:07:12.851 05:05:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.851 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:07:12.851 ************************************ 00:07:12.851 END TEST version 00:07:12.851 ************************************ 00:07:12.851 05:05:09 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:12.851 05:05:09 -- spdk/autotest.sh@191 -- # uname -s 00:07:12.851 05:05:09 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:12.851 05:05:09 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:12.851 05:05:09 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:12.851 05:05:09 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:12.851 05:05:09 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:12.851 05:05:09 -- spdk/autotest.sh@255 -- # 
timing_exit lib 00:07:12.851 05:05:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:12.851 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:07:12.851 05:05:09 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:12.851 05:05:09 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:12.851 05:05:09 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:12.851 05:05:09 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:12.851 05:05:09 -- spdk/autotest.sh@278 -- # '[' rdma = rdma ']' 00:07:12.851 05:05:09 -- spdk/autotest.sh@279 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:12.851 05:05:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:12.851 05:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.851 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:07:12.851 ************************************ 00:07:12.851 START TEST nvmf_rdma 00:07:12.851 ************************************ 00:07:12.851 05:05:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:13.111 * Looking for test storage... 
00:07:13.111 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:07:13.111 05:05:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:13.111 05:05:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:13.111 05:05:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:13.111 05:05:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:13.111 05:05:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:13.111 05:05:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:13.111 05:05:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:13.111 05:05:09 -- scripts/common.sh@335 -- # IFS=.-: 00:07:13.111 05:05:09 -- scripts/common.sh@335 -- # read -ra ver1 00:07:13.111 05:05:09 -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.111 05:05:09 -- scripts/common.sh@336 -- # read -ra ver2 00:07:13.111 05:05:09 -- scripts/common.sh@337 -- # local 'op=<' 00:07:13.111 05:05:09 -- scripts/common.sh@339 -- # ver1_l=2 00:07:13.111 05:05:09 -- scripts/common.sh@340 -- # ver2_l=1 00:07:13.111 05:05:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:13.111 05:05:09 -- scripts/common.sh@343 -- # case "$op" in 00:07:13.111 05:05:09 -- scripts/common.sh@344 -- # : 1 00:07:13.111 05:05:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:13.111 05:05:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.111 05:05:09 -- scripts/common.sh@364 -- # decimal 1 00:07:13.111 05:05:09 -- scripts/common.sh@352 -- # local d=1 00:07:13.111 05:05:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.111 05:05:09 -- scripts/common.sh@354 -- # echo 1 00:07:13.111 05:05:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:13.111 05:05:09 -- scripts/common.sh@365 -- # decimal 2 00:07:13.111 05:05:09 -- scripts/common.sh@352 -- # local d=2 00:07:13.111 05:05:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.111 05:05:09 -- scripts/common.sh@354 -- # echo 2 00:07:13.111 05:05:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:13.111 05:05:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:13.111 05:05:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:13.111 05:05:09 -- scripts/common.sh@367 -- # return 0 00:07:13.111 05:05:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.111 05:05:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:13.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.111 --rc genhtml_branch_coverage=1 00:07:13.111 --rc genhtml_function_coverage=1 00:07:13.111 --rc genhtml_legend=1 00:07:13.111 --rc geninfo_all_blocks=1 00:07:13.111 --rc geninfo_unexecuted_blocks=1 00:07:13.111 00:07:13.111 ' 00:07:13.111 05:05:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:13.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.111 --rc genhtml_branch_coverage=1 00:07:13.111 --rc genhtml_function_coverage=1 00:07:13.111 --rc genhtml_legend=1 00:07:13.111 --rc geninfo_all_blocks=1 00:07:13.111 --rc geninfo_unexecuted_blocks=1 00:07:13.111 00:07:13.111 ' 00:07:13.111 05:05:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:13.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.111 --rc genhtml_branch_coverage=1 00:07:13.111 --rc 
genhtml_function_coverage=1 00:07:13.111 --rc genhtml_legend=1 00:07:13.111 --rc geninfo_all_blocks=1 00:07:13.111 --rc geninfo_unexecuted_blocks=1 00:07:13.111 00:07:13.111 ' 00:07:13.111 05:05:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:13.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.111 --rc genhtml_branch_coverage=1 00:07:13.111 --rc genhtml_function_coverage=1 00:07:13.111 --rc genhtml_legend=1 00:07:13.111 --rc geninfo_all_blocks=1 00:07:13.111 --rc geninfo_unexecuted_blocks=1 00:07:13.111 00:07:13.111 ' 00:07:13.111 05:05:09 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:13.111 05:05:09 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:13.111 05:05:09 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.111 05:05:09 -- nvmf/common.sh@7 -- # uname -s 00:07:13.111 05:05:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.111 05:05:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.111 05:05:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.111 05:05:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.111 05:05:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.111 05:05:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.111 05:05:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.111 05:05:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.111 05:05:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.111 05:05:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.111 05:05:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:13.111 05:05:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:13.111 05:05:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.111 05:05:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.111 
05:05:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:13.111 05:05:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:13.111 05:05:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.111 05:05:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.111 05:05:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.111 05:05:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.111 05:05:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.111 05:05:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.111 05:05:09 -- paths/export.sh@5 -- # export PATH 00:07:13.111 05:05:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.111 05:05:09 -- nvmf/common.sh@46 -- # : 0 00:07:13.111 05:05:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:13.111 05:05:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:13.111 05:05:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:13.111 05:05:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.111 05:05:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.111 05:05:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:13.111 05:05:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:13.111 05:05:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:13.111 05:05:09 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:13.111 05:05:09 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:13.111 05:05:09 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:13.111 05:05:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.111 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:07:13.111 05:05:09 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:13.111 05:05:09 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:13.111 05:05:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:13.111 05:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.112 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:07:13.112 ************************************ 00:07:13.112 START TEST nvmf_example 00:07:13.112 ************************************ 00:07:13.112 05:05:09 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:13.371 * Looking for test storage... 00:07:13.371 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:13.372 05:05:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:13.372 05:05:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:13.372 05:05:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:13.372 05:05:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:13.372 05:05:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:13.372 05:05:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:13.372 05:05:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:13.372 05:05:10 -- scripts/common.sh@335 -- # IFS=.-: 00:07:13.372 05:05:10 -- scripts/common.sh@335 -- # read -ra ver1 00:07:13.372 05:05:10 -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.372 05:05:10 -- scripts/common.sh@336 -- # read -ra ver2 00:07:13.372 05:05:10 -- scripts/common.sh@337 -- # local 'op=<' 00:07:13.372 05:05:10 -- scripts/common.sh@339 -- # ver1_l=2 00:07:13.372 05:05:10 -- scripts/common.sh@340 -- # ver2_l=1 00:07:13.372 05:05:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:13.372 05:05:10 -- scripts/common.sh@343 -- # case "$op" in 00:07:13.372 05:05:10 -- scripts/common.sh@344 -- # : 1 00:07:13.372 05:05:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:13.372 05:05:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.372 05:05:10 -- scripts/common.sh@364 -- # decimal 1 00:07:13.372 05:05:10 -- scripts/common.sh@352 -- # local d=1 00:07:13.372 05:05:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.372 05:05:10 -- scripts/common.sh@354 -- # echo 1 00:07:13.372 05:05:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:13.372 05:05:10 -- scripts/common.sh@365 -- # decimal 2 00:07:13.372 05:05:10 -- scripts/common.sh@352 -- # local d=2 00:07:13.372 05:05:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.372 05:05:10 -- scripts/common.sh@354 -- # echo 2 00:07:13.372 05:05:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:13.372 05:05:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:13.372 05:05:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:13.372 05:05:10 -- scripts/common.sh@367 -- # return 0 00:07:13.372 05:05:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.372 05:05:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:13.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.372 --rc genhtml_branch_coverage=1 00:07:13.372 --rc genhtml_function_coverage=1 00:07:13.372 --rc genhtml_legend=1 00:07:13.372 --rc geninfo_all_blocks=1 00:07:13.372 --rc geninfo_unexecuted_blocks=1 00:07:13.372 00:07:13.372 ' 00:07:13.372 05:05:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:13.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.372 --rc genhtml_branch_coverage=1 00:07:13.372 --rc genhtml_function_coverage=1 00:07:13.372 --rc genhtml_legend=1 00:07:13.372 --rc geninfo_all_blocks=1 00:07:13.372 --rc geninfo_unexecuted_blocks=1 00:07:13.372 00:07:13.372 ' 00:07:13.372 05:05:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:13.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.372 --rc genhtml_branch_coverage=1 00:07:13.372 --rc 
genhtml_function_coverage=1 00:07:13.372 --rc genhtml_legend=1 00:07:13.372 --rc geninfo_all_blocks=1 00:07:13.372 --rc geninfo_unexecuted_blocks=1 00:07:13.372 00:07:13.372 ' 00:07:13.372 05:05:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:13.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.372 --rc genhtml_branch_coverage=1 00:07:13.372 --rc genhtml_function_coverage=1 00:07:13.372 --rc genhtml_legend=1 00:07:13.372 --rc geninfo_all_blocks=1 00:07:13.372 --rc geninfo_unexecuted_blocks=1 00:07:13.372 00:07:13.372 ' 00:07:13.372 05:05:10 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.372 05:05:10 -- nvmf/common.sh@7 -- # uname -s 00:07:13.372 05:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.372 05:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.372 05:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.372 05:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.372 05:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.372 05:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.372 05:05:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.372 05:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.372 05:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.372 05:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.372 05:05:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:13.372 05:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:13.372 05:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.372 05:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.372 05:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:13.372 05:05:10 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:13.372 05:05:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.372 05:05:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.372 05:05:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.372 05:05:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.372 05:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.372 05:05:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.372 05:05:10 -- paths/export.sh@5 -- # export PATH 00:07:13.372 05:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.372 05:05:10 -- nvmf/common.sh@46 -- # : 0 00:07:13.372 05:05:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:13.372 05:05:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:13.372 05:05:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:13.372 05:05:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.372 05:05:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.372 05:05:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:13.372 05:05:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:13.372 05:05:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:13.372 05:05:10 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:13.372 05:05:10 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:13.372 05:05:10 
-- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:13.372 05:05:10 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:13.372 05:05:10 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:13.372 05:05:10 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:13.372 05:05:10 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:13.372 05:05:10 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:13.372 05:05:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.372 05:05:10 -- common/autotest_common.sh@10 -- # set +x 00:07:13.372 05:05:10 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:13.372 05:05:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:13.372 05:05:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.372 05:05:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:13.372 05:05:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:13.372 05:05:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:13.372 05:05:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.372 05:05:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.372 05:05:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.372 05:05:10 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:07:13.372 05:05:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:13.372 05:05:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:13.372 05:05:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.648 05:05:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:18.648 05:05:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:18.648 05:05:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:18.648 05:05:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:18.648 05:05:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:18.648 05:05:15 -- nvmf/common.sh@292 -- # 
pci_drivers=() 00:07:18.648 05:05:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:18.648 05:05:15 -- nvmf/common.sh@294 -- # net_devs=() 00:07:18.648 05:05:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:18.648 05:05:15 -- nvmf/common.sh@295 -- # e810=() 00:07:18.648 05:05:15 -- nvmf/common.sh@295 -- # local -ga e810 00:07:18.648 05:05:15 -- nvmf/common.sh@296 -- # x722=() 00:07:18.648 05:05:15 -- nvmf/common.sh@296 -- # local -ga x722 00:07:18.648 05:05:15 -- nvmf/common.sh@297 -- # mlx=() 00:07:18.648 05:05:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:18.648 05:05:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.648 05:05:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:18.648 05:05:15 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:18.648 05:05:15 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:18.648 05:05:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@328 -- # 
[[ e810 == e810 ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:18.648 05:05:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:18.648 05:05:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:18.648 05:05:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:18.648 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:18.648 05:05:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:18.648 05:05:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:18.648 05:05:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:18.648 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:18.648 05:05:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:18.648 05:05:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:18.648 05:05:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:18.648 05:05:15 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:07:18.648 05:05:15 -- nvmf/common.sh@376 -- # modinfo irdma 00:07:18.648 05:05:15 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:07:18.648 
05:05:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.649 05:05:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:18.649 05:05:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.649 05:05:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:18.649 Found net devices under 0000:af:00.0: cvl_0_0 00:07:18.649 05:05:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.649 05:05:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.649 05:05:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:18.649 05:05:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.649 05:05:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:18.649 Found net devices under 0000:af:00.1: cvl_0_1 00:07:18.649 05:05:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.649 05:05:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:18.649 05:05:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:18.649 05:05:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:18.649 05:05:15 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:18.649 05:05:15 -- nvmf/common.sh@57 -- # uname 00:07:18.649 05:05:15 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:18.649 05:05:15 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:18.649 05:05:15 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:18.649 05:05:15 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:18.649 05:05:15 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:18.649 05:05:15 -- 
nvmf/common.sh@65 -- # modprobe iw_cm 00:07:18.649 05:05:15 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:18.649 05:05:15 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:18.649 05:05:15 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:18.649 05:05:15 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:18.649 05:05:15 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:18.649 05:05:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:18.649 05:05:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:18.649 05:05:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:18.649 05:05:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:18.649 05:05:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:18.649 05:05:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:07:18.649 05:05:15 -- nvmf/common.sh@104 -- # continue 2 00:07:18.649 05:05:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:07:18.649 05:05:15 -- nvmf/common.sh@104 -- # continue 2 00:07:18.649 05:05:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:18.649 05:05:15 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:07:18.649 05:05:15 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # ip -o -4 addr 
show cvl_0_0 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:18.649 05:05:15 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:18.649 05:05:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:07:18.649 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:18.649 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:18.649 altname enp175s0f0np0 00:07:18.649 altname ens801f0np0 00:07:18.649 inet 192.168.100.8/24 scope global cvl_0_0 00:07:18.649 valid_lft forever preferred_lft forever 00:07:18.649 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:18.649 valid_lft forever preferred_lft forever 00:07:18.649 05:05:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:18.649 05:05:15 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:07:18.649 05:05:15 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:18.649 05:05:15 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:18.649 05:05:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:07:18.649 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:18.649 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:18.649 altname enp175s0f1np1 00:07:18.649 altname ens801f1np1 00:07:18.649 inet 192.168.100.9/24 scope global cvl_0_1 00:07:18.649 valid_lft forever preferred_lft forever 00:07:18.649 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:18.649 valid_lft forever preferred_lft forever 00:07:18.649 05:05:15 -- nvmf/common.sh@410 -- # return 0 00:07:18.649 05:05:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 
00:07:18.649 05:05:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:18.649 05:05:15 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:18.649 05:05:15 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:18.649 05:05:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:18.649 05:05:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:18.649 05:05:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:18.649 05:05:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:18.649 05:05:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:18.649 05:05:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:07:18.649 05:05:15 -- nvmf/common.sh@104 -- # continue 2 00:07:18.649 05:05:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.649 05:05:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:18.649 05:05:15 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:07:18.649 05:05:15 -- nvmf/common.sh@104 -- # continue 2 00:07:18.649 05:05:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:18.649 05:05:15 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:07:18.649 05:05:15 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:18.649 
05:05:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:18.649 05:05:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:18.649 05:05:15 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:07:18.649 05:05:15 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:07:18.649 05:05:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:18.649 05:05:15 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:18.649 192.168.100.9' 00:07:18.649 05:05:15 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:18.649 192.168.100.9' 00:07:18.649 05:05:15 -- nvmf/common.sh@445 -- # head -n 1 00:07:18.650 05:05:15 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:18.650 05:05:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:18.650 192.168.100.9' 00:07:18.650 05:05:15 -- nvmf/common.sh@446 -- # tail -n +2 00:07:18.650 05:05:15 -- nvmf/common.sh@446 -- # head -n 1 00:07:18.650 05:05:15 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:18.650 05:05:15 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:18.650 05:05:15 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:18.650 05:05:15 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:18.650 05:05:15 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:18.650 05:05:15 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:18.650 05:05:15 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:18.650 05:05:15 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:18.650 05:05:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.650 05:05:15 -- common/autotest_common.sh@10 -- # set +x 00:07:18.650 05:05:15 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:18.650 05:05:15 -- target/nvmf_example.sh@34 -- # nvmfpid=132306 00:07:18.650 05:05:15 -- 
target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:18.650 05:05:15 -- target/nvmf_example.sh@36 -- # waitforlisten 132306 00:07:18.650 05:05:15 -- common/autotest_common.sh@829 -- # '[' -z 132306 ']' 00:07:18.650 05:05:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.650 05:05:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.650 05:05:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.650 05:05:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.650 05:05:15 -- common/autotest_common.sh@10 -- # set +x 00:07:18.650 05:05:15 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:18.650 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.588 05:05:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.588 05:05:16 -- common/autotest_common.sh@862 -- # return 0 00:07:19.588 05:05:16 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:19.588 05:05:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.588 05:05:16 -- common/autotest_common.sh@10 -- # set +x 00:07:19.588 05:05:16 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:19.588 05:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.588 05:05:16 -- common/autotest_common.sh@10 -- # set +x 00:07:19.588 05:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.588 05:05:16 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:19.588 05:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.588 05:05:16 -- common/autotest_common.sh@10 -- # set +x 
00:07:19.588 05:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.588 05:05:16 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:19.588 05:05:16 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:19.588 05:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.588 05:05:16 -- common/autotest_common.sh@10 -- # set +x 00:07:19.588 05:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.588 05:05:16 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:19.588 05:05:16 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:19.588 05:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.588 05:05:16 -- common/autotest_common.sh@10 -- # set +x 00:07:19.588 05:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.588 05:05:16 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:19.588 05:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.588 05:05:16 -- common/autotest_common.sh@10 -- # set +x 00:07:19.588 05:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.588 05:05:16 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:19.588 05:05:16 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:19.588 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.810 Initializing NVMe Controllers 00:07:31.810 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:31.810 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 
with lcore 0 00:07:31.810 Initialization complete. Launching workers. 00:07:31.810 ======================================================== 00:07:31.810 Latency(us) 00:07:31.810 Device Information : IOPS MiB/s Average min max 00:07:31.810 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25884.43 101.11 2472.51 480.39 15828.53 00:07:31.810 ======================================================== 00:07:31.810 Total : 25884.43 101.11 2472.51 480.39 15828.53 00:07:31.810 00:07:31.810 05:05:27 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:31.810 05:05:27 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:31.810 05:05:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:31.810 05:05:27 -- nvmf/common.sh@116 -- # sync 00:07:31.810 05:05:27 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:07:31.810 05:05:27 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:07:31.810 05:05:27 -- nvmf/common.sh@119 -- # set +e 00:07:31.810 05:05:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:31.810 05:05:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:07:31.810 rmmod nvme_rdma 00:07:31.810 rmmod nvme_fabrics 00:07:31.810 05:05:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:31.810 05:05:27 -- nvmf/common.sh@123 -- # set -e 00:07:31.810 05:05:27 -- nvmf/common.sh@124 -- # return 0 00:07:31.810 05:05:27 -- nvmf/common.sh@477 -- # '[' -n 132306 ']' 00:07:31.810 05:05:27 -- nvmf/common.sh@478 -- # killprocess 132306 00:07:31.810 05:05:27 -- common/autotest_common.sh@936 -- # '[' -z 132306 ']' 00:07:31.810 05:05:27 -- common/autotest_common.sh@940 -- # kill -0 132306 00:07:31.810 05:05:27 -- common/autotest_common.sh@941 -- # uname 00:07:31.810 05:05:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:31.810 05:05:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132306 00:07:31.810 05:05:27 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:31.810 05:05:27 -- 
common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:31.810 05:05:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132306' 00:07:31.810 killing process with pid 132306 00:07:31.810 05:05:27 -- common/autotest_common.sh@955 -- # kill 132306 00:07:31.810 05:05:27 -- common/autotest_common.sh@960 -- # wait 132306 00:07:31.810 nvmf threads initialize successfully 00:07:31.810 bdev subsystem init successfully 00:07:31.810 created a nvmf target service 00:07:31.810 create targets's poll groups done 00:07:31.810 all subsystems of target started 00:07:31.810 nvmf target is running 00:07:31.810 all subsystems of target stopped 00:07:31.810 destroy targets's poll groups done 00:07:31.810 destroyed the nvmf target service 00:07:31.810 bdev subsystem finish successfully 00:07:31.810 nvmf threads destroy successfully 00:07:31.810 05:05:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:31.810 05:05:27 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:07:31.810 05:05:27 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:31.810 05:05:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.810 05:05:27 -- common/autotest_common.sh@10 -- # set +x 00:07:31.810 00:07:31.810 real 0m18.048s 00:07:31.810 user 0m50.952s 00:07:31.810 sys 0m4.331s 00:07:31.810 05:05:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.810 05:05:27 -- common/autotest_common.sh@10 -- # set +x 00:07:31.810 ************************************ 00:07:31.810 END TEST nvmf_example 00:07:31.810 ************************************ 00:07:31.810 05:05:27 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:31.810 05:05:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:31.810 05:05:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.810 05:05:27 -- common/autotest_common.sh@10 -- # set +x 00:07:31.810 
************************************ 00:07:31.810 START TEST nvmf_filesystem 00:07:31.810 ************************************ 00:07:31.810 05:05:27 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:31.810 * Looking for test storage... 00:07:31.810 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:31.810 05:05:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:31.810 05:05:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:31.810 05:05:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:31.810 05:05:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:31.810 05:05:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:31.810 05:05:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:31.811 05:05:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:31.811 05:05:28 -- scripts/common.sh@335 -- # IFS=.-: 00:07:31.811 05:05:28 -- scripts/common.sh@335 -- # read -ra ver1 00:07:31.811 05:05:28 -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.811 05:05:28 -- scripts/common.sh@336 -- # read -ra ver2 00:07:31.811 05:05:28 -- scripts/common.sh@337 -- # local 'op=<' 00:07:31.811 05:05:28 -- scripts/common.sh@339 -- # ver1_l=2 00:07:31.811 05:05:28 -- scripts/common.sh@340 -- # ver2_l=1 00:07:31.811 05:05:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:31.811 05:05:28 -- scripts/common.sh@343 -- # case "$op" in 00:07:31.811 05:05:28 -- scripts/common.sh@344 -- # : 1 00:07:31.811 05:05:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:31.811 05:05:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.811 05:05:28 -- scripts/common.sh@364 -- # decimal 1 00:07:31.811 05:05:28 -- scripts/common.sh@352 -- # local d=1 00:07:31.811 05:05:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.811 05:05:28 -- scripts/common.sh@354 -- # echo 1 00:07:31.811 05:05:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:31.811 05:05:28 -- scripts/common.sh@365 -- # decimal 2 00:07:31.811 05:05:28 -- scripts/common.sh@352 -- # local d=2 00:07:31.811 05:05:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.811 05:05:28 -- scripts/common.sh@354 -- # echo 2 00:07:31.811 05:05:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:31.811 05:05:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:31.811 05:05:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:31.811 05:05:28 -- scripts/common.sh@367 -- # return 0 00:07:31.811 05:05:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.811 05:05:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:31.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.811 --rc genhtml_branch_coverage=1 00:07:31.811 --rc genhtml_function_coverage=1 00:07:31.811 --rc genhtml_legend=1 00:07:31.811 --rc geninfo_all_blocks=1 00:07:31.811 --rc geninfo_unexecuted_blocks=1 00:07:31.811 00:07:31.811 ' 00:07:31.811 05:05:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:31.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.811 --rc genhtml_branch_coverage=1 00:07:31.811 --rc genhtml_function_coverage=1 00:07:31.811 --rc genhtml_legend=1 00:07:31.811 --rc geninfo_all_blocks=1 00:07:31.811 --rc geninfo_unexecuted_blocks=1 00:07:31.811 00:07:31.811 ' 00:07:31.811 05:05:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:31.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.811 --rc genhtml_branch_coverage=1 00:07:31.811 --rc 
genhtml_function_coverage=1 00:07:31.811 --rc genhtml_legend=1 00:07:31.811 --rc geninfo_all_blocks=1 00:07:31.811 --rc geninfo_unexecuted_blocks=1 00:07:31.811 00:07:31.811 ' 00:07:31.811 05:05:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:31.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.811 --rc genhtml_branch_coverage=1 00:07:31.811 --rc genhtml_function_coverage=1 00:07:31.811 --rc genhtml_legend=1 00:07:31.811 --rc geninfo_all_blocks=1 00:07:31.811 --rc geninfo_unexecuted_blocks=1 00:07:31.811 00:07:31.811 ' 00:07:31.811 05:05:28 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh 00:07:31.811 05:05:28 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:31.811 05:05:28 -- common/autotest_common.sh@34 -- # set -e 00:07:31.811 05:05:28 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:31.811 05:05:28 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:31.811 05:05:28 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:31.811 05:05:28 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh 00:07:31.811 05:05:28 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:31.811 05:05:28 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:31.811 05:05:28 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:31.811 05:05:28 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:31.811 05:05:28 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:31.811 05:05:28 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:31.811 05:05:28 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:31.811 05:05:28 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:31.811 05:05:28 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:31.811 05:05:28 -- common/build_config.sh@10 -- # 
CONFIG_IDXD=y 00:07:31.811 05:05:28 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:31.811 05:05:28 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:31.811 05:05:28 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:31.811 05:05:28 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:31.811 05:05:28 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:31.811 05:05:28 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:31.811 05:05:28 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:31.811 05:05:28 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:31.811 05:05:28 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:07:31.811 05:05:28 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:31.811 05:05:28 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:31.811 05:05:28 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:31.811 05:05:28 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:31.811 05:05:28 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:31.811 05:05:28 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:31.811 05:05:28 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:31.811 05:05:28 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:31.811 05:05:28 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:31.811 05:05:28 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:31.811 05:05:28 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:31.811 05:05:28 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:31.811 05:05:28 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:31.811 05:05:28 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:31.811 05:05:28 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:31.811 05:05:28 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:31.811 05:05:28 -- 
common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:07:31.811 05:05:28 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:31.811 05:05:28 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:31.811 05:05:28 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:31.811 05:05:28 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:31.811 05:05:28 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:31.811 05:05:28 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:31.811 05:05:28 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:31.811 05:05:28 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:31.811 05:05:28 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:31.811 05:05:28 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:31.811 05:05:28 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:31.811 05:05:28 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:31.811 05:05:28 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:31.811 05:05:28 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:31.811 05:05:28 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:31.811 05:05:28 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:31.811 05:05:28 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:31.811 05:05:28 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:31.811 05:05:28 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:31.811 05:05:28 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:31.812 05:05:28 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:31.812 05:05:28 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:31.812 05:05:28 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:31.812 05:05:28 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:31.812 05:05:28 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:31.812 05:05:28 
-- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:31.812 05:05:28 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:31.812 05:05:28 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:31.812 05:05:28 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:31.812 05:05:28 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:31.812 05:05:28 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:31.812 05:05:28 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:31.812 05:05:28 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:31.812 05:05:28 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:31.812 05:05:28 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:31.812 05:05:28 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:31.812 05:05:28 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:31.812 05:05:28 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:31.812 05:05:28 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:31.812 05:05:28 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:31.812 05:05:28 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:31.812 05:05:28 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:31.812 05:05:28 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:31.812 05:05:28 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:07:31.812 05:05:28 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:07:31.812 05:05:28 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:07:31.812 05:05:28 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:07:31.812 05:05:28 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:07:31.812 05:05:28 -- 
common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:07:31.812 05:05:28 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:07:31.812 05:05:28 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:07:31.812 05:05:28 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:31.812 05:05:28 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:31.812 05:05:28 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:31.812 05:05:28 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:31.812 05:05:28 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:31.812 05:05:28 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:31.812 05:05:28 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/config.h ]] 00:07:31.812 05:05:28 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:31.812 #define SPDK_CONFIG_H 00:07:31.812 #define SPDK_CONFIG_APPS 1 00:07:31.812 #define SPDK_CONFIG_ARCH native 00:07:31.812 #undef SPDK_CONFIG_ASAN 00:07:31.812 #undef SPDK_CONFIG_AVAHI 00:07:31.812 #undef SPDK_CONFIG_CET 00:07:31.812 #define SPDK_CONFIG_COVERAGE 1 00:07:31.812 #define SPDK_CONFIG_CROSS_PREFIX 00:07:31.812 #undef SPDK_CONFIG_CRYPTO 00:07:31.812 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:31.812 #undef SPDK_CONFIG_CUSTOMOCF 00:07:31.812 #undef SPDK_CONFIG_DAOS 00:07:31.812 #define SPDK_CONFIG_DAOS_DIR 00:07:31.812 #define SPDK_CONFIG_DEBUG 1 00:07:31.812 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:31.812 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:07:31.812 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:31.812 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:31.812 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 
00:07:31.812 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:07:31.812 #define SPDK_CONFIG_EXAMPLES 1 00:07:31.812 #undef SPDK_CONFIG_FC 00:07:31.812 #define SPDK_CONFIG_FC_PATH 00:07:31.812 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:31.812 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:31.812 #undef SPDK_CONFIG_FUSE 00:07:31.812 #undef SPDK_CONFIG_FUZZER 00:07:31.812 #define SPDK_CONFIG_FUZZER_LIB 00:07:31.812 #undef SPDK_CONFIG_GOLANG 00:07:31.812 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:31.812 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:31.812 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:31.812 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:31.812 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:31.812 #define SPDK_CONFIG_IDXD 1 00:07:31.812 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:31.812 #undef SPDK_CONFIG_IPSEC_MB 00:07:31.812 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:31.812 #define SPDK_CONFIG_ISAL 1 00:07:31.812 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:31.812 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:31.812 #define SPDK_CONFIG_LIBDIR 00:07:31.812 #undef SPDK_CONFIG_LTO 00:07:31.812 #define SPDK_CONFIG_MAX_LCORES 00:07:31.812 #define SPDK_CONFIG_NVME_CUSE 1 00:07:31.812 #undef SPDK_CONFIG_OCF 00:07:31.812 #define SPDK_CONFIG_OCF_PATH 00:07:31.812 #define SPDK_CONFIG_OPENSSL_PATH 00:07:31.812 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:31.812 #undef SPDK_CONFIG_PGO_USE 00:07:31.812 #define SPDK_CONFIG_PREFIX /usr/local 00:07:31.812 #undef SPDK_CONFIG_RAID5F 00:07:31.812 #undef SPDK_CONFIG_RBD 00:07:31.812 #define SPDK_CONFIG_RDMA 1 00:07:31.812 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:31.812 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:31.812 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:31.812 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:31.812 #define SPDK_CONFIG_SHARED 1 00:07:31.812 #undef SPDK_CONFIG_SMA 00:07:31.812 #define SPDK_CONFIG_TESTS 1 00:07:31.812 #undef SPDK_CONFIG_TSAN 00:07:31.812 #define SPDK_CONFIG_UBLK 1 
00:07:31.812 #define SPDK_CONFIG_UBSAN 1 00:07:31.812 #undef SPDK_CONFIG_UNIT_TESTS 00:07:31.812 #undef SPDK_CONFIG_URING 00:07:31.812 #define SPDK_CONFIG_URING_PATH 00:07:31.812 #undef SPDK_CONFIG_URING_ZNS 00:07:31.812 #undef SPDK_CONFIG_USDT 00:07:31.812 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:31.812 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:31.812 #undef SPDK_CONFIG_VFIO_USER 00:07:31.812 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:31.812 #define SPDK_CONFIG_VHOST 1 00:07:31.812 #define SPDK_CONFIG_VIRTIO 1 00:07:31.812 #undef SPDK_CONFIG_VTUNE 00:07:31.812 #define SPDK_CONFIG_VTUNE_DIR 00:07:31.812 #define SPDK_CONFIG_WERROR 1 00:07:31.812 #define SPDK_CONFIG_WPDK_DIR 00:07:31.812 #undef SPDK_CONFIG_XNVME 00:07:31.812 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:31.812 05:05:28 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:31.812 05:05:28 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:31.812 05:05:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.812 05:05:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.812 05:05:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.812 05:05:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.813 05:05:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.813 05:05:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.813 05:05:28 -- paths/export.sh@5 -- # export PATH 00:07:31.813 05:05:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.813 05:05:28 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:07:31.813 05:05:28 -- pm/common@6 -- # dirname 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:07:31.813 05:05:28 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:07:31.813 05:05:28 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:07:31.813 05:05:28 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:31.813 05:05:28 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:07:31.813 05:05:28 -- pm/common@16 -- # TEST_TAG=N/A 00:07:31.813 05:05:28 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.run_test_name 00:07:31.813 05:05:28 -- common/autotest_common.sh@52 -- # : 1 00:07:31.813 05:05:28 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:31.813 05:05:28 -- common/autotest_common.sh@56 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:31.813 05:05:28 -- common/autotest_common.sh@58 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:31.813 05:05:28 -- common/autotest_common.sh@60 -- # : 1 00:07:31.813 05:05:28 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:31.813 05:05:28 -- common/autotest_common.sh@62 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:31.813 05:05:28 -- common/autotest_common.sh@64 -- # : 00:07:31.813 05:05:28 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:31.813 05:05:28 -- common/autotest_common.sh@66 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:31.813 05:05:28 -- common/autotest_common.sh@68 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:31.813 05:05:28 -- common/autotest_common.sh@70 -- # : 0 00:07:31.813 05:05:28 -- 
common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:31.813 05:05:28 -- common/autotest_common.sh@72 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:31.813 05:05:28 -- common/autotest_common.sh@74 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:31.813 05:05:28 -- common/autotest_common.sh@76 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:31.813 05:05:28 -- common/autotest_common.sh@78 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:31.813 05:05:28 -- common/autotest_common.sh@80 -- # : 1 00:07:31.813 05:05:28 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:31.813 05:05:28 -- common/autotest_common.sh@82 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:31.813 05:05:28 -- common/autotest_common.sh@84 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:31.813 05:05:28 -- common/autotest_common.sh@86 -- # : 1 00:07:31.813 05:05:28 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:31.813 05:05:28 -- common/autotest_common.sh@88 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:31.813 05:05:28 -- common/autotest_common.sh@90 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:31.813 05:05:28 -- common/autotest_common.sh@92 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:31.813 05:05:28 -- common/autotest_common.sh@94 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:31.813 05:05:28 -- common/autotest_common.sh@96 -- # : rdma 00:07:31.813 05:05:28 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 
00:07:31.813 05:05:28 -- common/autotest_common.sh@98 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:31.813 05:05:28 -- common/autotest_common.sh@100 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:31.813 05:05:28 -- common/autotest_common.sh@102 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:31.813 05:05:28 -- common/autotest_common.sh@104 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:31.813 05:05:28 -- common/autotest_common.sh@106 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:31.813 05:05:28 -- common/autotest_common.sh@108 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:31.813 05:05:28 -- common/autotest_common.sh@110 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:31.813 05:05:28 -- common/autotest_common.sh@112 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:31.813 05:05:28 -- common/autotest_common.sh@114 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:31.813 05:05:28 -- common/autotest_common.sh@116 -- # : 1 00:07:31.813 05:05:28 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:31.813 05:05:28 -- common/autotest_common.sh@118 -- # : 00:07:31.813 05:05:28 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:31.813 05:05:28 -- common/autotest_common.sh@120 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:31.813 05:05:28 -- common/autotest_common.sh@122 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:31.813 05:05:28 -- common/autotest_common.sh@124 -- # : 0 
00:07:31.813 05:05:28 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:31.813 05:05:28 -- common/autotest_common.sh@126 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:31.813 05:05:28 -- common/autotest_common.sh@128 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:31.813 05:05:28 -- common/autotest_common.sh@130 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:31.813 05:05:28 -- common/autotest_common.sh@132 -- # : 00:07:31.813 05:05:28 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:31.813 05:05:28 -- common/autotest_common.sh@134 -- # : true 00:07:31.813 05:05:28 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:31.813 05:05:28 -- common/autotest_common.sh@136 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:31.813 05:05:28 -- common/autotest_common.sh@138 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:31.813 05:05:28 -- common/autotest_common.sh@140 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:31.813 05:05:28 -- common/autotest_common.sh@142 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:31.813 05:05:28 -- common/autotest_common.sh@144 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:31.813 05:05:28 -- common/autotest_common.sh@146 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:31.813 05:05:28 -- common/autotest_common.sh@148 -- # : e810 00:07:31.813 05:05:28 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:31.813 05:05:28 -- common/autotest_common.sh@150 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@151 -- # export 
SPDK_TEST_SMA 00:07:31.813 05:05:28 -- common/autotest_common.sh@152 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:31.813 05:05:28 -- common/autotest_common.sh@154 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:31.813 05:05:28 -- common/autotest_common.sh@156 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:31.813 05:05:28 -- common/autotest_common.sh@158 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:31.813 05:05:28 -- common/autotest_common.sh@160 -- # : 0 00:07:31.813 05:05:28 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:31.814 05:05:28 -- common/autotest_common.sh@163 -- # : 00:07:31.814 05:05:28 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:31.814 05:05:28 -- common/autotest_common.sh@165 -- # : 0 00:07:31.814 05:05:28 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:31.814 05:05:28 -- common/autotest_common.sh@167 -- # : 0 00:07:31.814 05:05:28 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:31.814 05:05:28 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:07:31.814 05:05:28 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:07:31.814 05:05:28 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:07:31.814 05:05:28 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:07:31.814 05:05:28 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:31.814 05:05:28 -- common/autotest_common.sh@173 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:31.814 05:05:28 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:31.814 05:05:28 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:31.814 05:05:28 -- common/autotest_common.sh@177 -- # export 
PCI_BLOCK_SYNC_ON_RESET=yes 00:07:31.814 05:05:28 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:31.814 05:05:28 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:07:31.814 05:05:28 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:07:31.814 05:05:28 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:31.814 05:05:28 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:31.814 05:05:28 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:31.814 05:05:28 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:31.814 05:05:28 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:31.814 05:05:28 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:31.814 05:05:28 -- common/autotest_common.sh@194 -- 
# asan_suppression_file=/var/tmp/asan_suppression_file 00:07:31.814 05:05:28 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:31.814 05:05:28 -- common/autotest_common.sh@196 -- # cat 00:07:31.814 05:05:28 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:31.814 05:05:28 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:31.814 05:05:28 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:31.814 05:05:28 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:31.814 05:05:28 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:31.814 05:05:28 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:31.814 05:05:28 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:31.814 05:05:28 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:07:31.814 05:05:28 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:07:31.814 05:05:28 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:07:31.814 05:05:28 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:07:31.814 05:05:28 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:31.814 05:05:28 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:31.814 05:05:28 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:31.814 05:05:28 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
00:07:31.814 05:05:28 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:31.814 05:05:28 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:31.814 05:05:28 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:31.814 05:05:28 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:31.814 05:05:28 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:31.814 05:05:28 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:31.814 05:05:28 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:31.814 05:05:28 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:31.814 05:05:28 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:31.814 05:05:28 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:31.814 05:05:28 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:31.814 05:05:28 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:31.814 05:05:28 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:31.814 05:05:28 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:31.814 05:05:28 -- common/autotest_common.sh@259 -- # valgrind= 00:07:31.814 05:05:28 -- common/autotest_common.sh@265 -- # uname -s 00:07:31.814 05:05:28 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:31.814 05:05:28 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:31.814 05:05:28 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:31.814 05:05:28 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:31.814 05:05:28 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:31.814 05:05:28 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:31.814 05:05:28 -- common/autotest_common.sh@275 -- # MAKE=make 
00:07:31.814 05:05:28 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j96 00:07:31.814 05:05:28 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:31.814 05:05:28 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:31.814 05:05:28 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output ']' 00:07:31.814 05:05:28 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:31.814 05:05:28 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:31.814 05:05:28 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:31.814 05:05:28 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:31.814 05:05:28 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=rdma 00:07:31.814 05:05:28 -- common/autotest_common.sh@319 -- # [[ -z 134490 ]] 00:07:31.814 05:05:28 -- common/autotest_common.sh@319 -- # kill -0 134490 00:07:31.814 05:05:28 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:31.814 05:05:28 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:31.814 05:05:28 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:31.814 05:05:28 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:31.814 05:05:28 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:31.814 05:05:28 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:31.814 05:05:28 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:31.814 05:05:28 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:31.814 05:05:28 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.MGWJW0 00:07:31.815 05:05:28 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:31.815 05:05:28 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:31.815 05:05:28 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 
00:07:31.815 05:05:28 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target /tmp/spdk.MGWJW0/tests/target /tmp/spdk.MGWJW0 00:07:31.815 05:05:28 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:31.815 05:05:28 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.815 05:05:28 -- common/autotest_common.sh@328 -- # df -T 00:07:31.815 05:05:28 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:07:31.815 05:05:28 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:31.815 05:05:28 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:07:31.815 05:05:28 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:07:31.815 05:05:28 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # avails["$mount"]=89951748096 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # sizes["$mount"]=95552401408 00:07:31.815 05:05:28 -- common/autotest_common.sh@364 -- # uses["$mount"]=5600653312 
00:07:31.815 05:05:28 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # avails["$mount"]=47774941184 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # sizes["$mount"]=47776198656 00:07:31.815 05:05:28 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:31.815 05:05:28 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # avails["$mount"]=19101069312 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # sizes["$mount"]=19110481920 00:07:31.815 05:05:28 -- common/autotest_common.sh@364 -- # uses["$mount"]=9412608 00:07:31.815 05:05:28 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # avails["$mount"]=47775899648 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # sizes["$mount"]=47776202752 00:07:31.815 05:05:28 -- common/autotest_common.sh@364 -- # uses["$mount"]=303104 00:07:31.815 05:05:28 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # avails["$mount"]=9555226624 00:07:31.815 05:05:28 -- common/autotest_common.sh@363 -- # 
sizes["$mount"]=9555238912 00:07:31.815 05:05:28 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:31.815 05:05:28 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:31.815 05:05:28 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:07:31.815 * Looking for test storage... 00:07:31.815 05:05:28 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:31.815 05:05:28 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:31.815 05:05:28 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:31.815 05:05:28 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:31.815 05:05:28 -- common/autotest_common.sh@373 -- # mount=/ 00:07:31.815 05:05:28 -- common/autotest_common.sh@375 -- # target_space=89951748096 00:07:31.815 05:05:28 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:31.815 05:05:28 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:31.815 05:05:28 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:07:31.815 05:05:28 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:07:31.815 05:05:28 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:07:31.815 05:05:28 -- common/autotest_common.sh@382 -- # new_size=7815245824 00:07:31.815 05:05:28 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:31.815 05:05:28 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:31.815 05:05:28 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:31.815 05:05:28 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:31.815 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:31.815 05:05:28 -- common/autotest_common.sh@390 -- # return 0 00:07:31.815 05:05:28 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:31.815 05:05:28 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:31.815 05:05:28 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:31.815 05:05:28 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:31.815 05:05:28 -- common/autotest_common.sh@1682 -- # true 00:07:31.815 05:05:28 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:31.815 05:05:28 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:31.815 05:05:28 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:31.815 05:05:28 -- common/autotest_common.sh@27 -- # exec 00:07:31.815 05:05:28 -- common/autotest_common.sh@29 -- # exec 00:07:31.815 05:05:28 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:31.815 05:05:28 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:31.815 05:05:28 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:31.815 05:05:28 -- common/autotest_common.sh@18 -- # set -x 00:07:31.815 05:05:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:31.815 05:05:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:31.815 05:05:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:31.815 05:05:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:31.815 05:05:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:31.815 05:05:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:31.815 05:05:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:31.815 05:05:28 -- scripts/common.sh@335 -- # IFS=.-: 00:07:31.815 05:05:28 -- scripts/common.sh@335 -- # read -ra ver1 00:07:31.815 05:05:28 -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.815 05:05:28 -- scripts/common.sh@336 -- # read -ra ver2 00:07:31.815 05:05:28 -- scripts/common.sh@337 -- # local 'op=<' 00:07:31.815 05:05:28 -- scripts/common.sh@339 -- # ver1_l=2 00:07:31.815 05:05:28 -- scripts/common.sh@340 -- # ver2_l=1 00:07:31.815 05:05:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:31.815 05:05:28 -- scripts/common.sh@343 -- # case "$op" in 00:07:31.815 05:05:28 -- scripts/common.sh@344 -- # : 1 00:07:31.815 05:05:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:31.815 05:05:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.815 05:05:28 -- scripts/common.sh@364 -- # decimal 1 00:07:31.815 05:05:28 -- scripts/common.sh@352 -- # local d=1 00:07:31.815 05:05:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.815 05:05:28 -- scripts/common.sh@354 -- # echo 1 00:07:31.815 05:05:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:31.815 05:05:28 -- scripts/common.sh@365 -- # decimal 2 00:07:31.815 05:05:28 -- scripts/common.sh@352 -- # local d=2 00:07:31.815 05:05:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.815 05:05:28 -- scripts/common.sh@354 -- # echo 2 00:07:31.815 05:05:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:31.815 05:05:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:31.815 05:05:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:31.815 05:05:28 -- scripts/common.sh@367 -- # return 0 00:07:31.815 05:05:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.815 05:05:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:31.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.816 --rc genhtml_branch_coverage=1 00:07:31.816 --rc genhtml_function_coverage=1 00:07:31.816 --rc genhtml_legend=1 00:07:31.816 --rc geninfo_all_blocks=1 00:07:31.816 --rc geninfo_unexecuted_blocks=1 00:07:31.816 00:07:31.816 ' 00:07:31.816 05:05:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:31.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.816 --rc genhtml_branch_coverage=1 00:07:31.816 --rc genhtml_function_coverage=1 00:07:31.816 --rc genhtml_legend=1 00:07:31.816 --rc geninfo_all_blocks=1 00:07:31.816 --rc geninfo_unexecuted_blocks=1 00:07:31.816 00:07:31.816 ' 00:07:31.816 05:05:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:31.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.816 --rc genhtml_branch_coverage=1 00:07:31.816 --rc 
genhtml_function_coverage=1 00:07:31.816 --rc genhtml_legend=1 00:07:31.816 --rc geninfo_all_blocks=1 00:07:31.816 --rc geninfo_unexecuted_blocks=1 00:07:31.816 00:07:31.816 ' 00:07:31.816 05:05:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:31.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.816 --rc genhtml_branch_coverage=1 00:07:31.816 --rc genhtml_function_coverage=1 00:07:31.816 --rc genhtml_legend=1 00:07:31.816 --rc geninfo_all_blocks=1 00:07:31.816 --rc geninfo_unexecuted_blocks=1 00:07:31.816 00:07:31.816 ' 00:07:31.816 05:05:28 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.816 05:05:28 -- nvmf/common.sh@7 -- # uname -s 00:07:31.816 05:05:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.816 05:05:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.816 05:05:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.816 05:05:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.816 05:05:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.816 05:05:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.816 05:05:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.816 05:05:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.816 05:05:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.816 05:05:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.816 05:05:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:31.816 05:05:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:31.816 05:05:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.816 05:05:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.816 05:05:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:31.816 05:05:28 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:31.816 05:05:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.816 05:05:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.816 05:05:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.816 05:05:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.816 05:05:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.816 05:05:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.816 05:05:28 -- paths/export.sh@5 -- # export PATH 00:07:31.816 05:05:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.816 05:05:28 -- nvmf/common.sh@46 -- # : 0 00:07:31.816 05:05:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:31.816 05:05:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:31.816 05:05:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:31.816 05:05:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.816 05:05:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.816 05:05:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:31.816 05:05:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:31.816 05:05:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:31.816 05:05:28 -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:07:31.816 05:05:28 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:31.816 05:05:28 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:31.816 05:05:28 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:31.816 05:05:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.816 05:05:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:31.816 05:05:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:31.816 05:05:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:31.816 05:05:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.816 05:05:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.816 05:05:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.816 05:05:28 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:07:31.816 05:05:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:31.816 05:05:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:31.816 05:05:28 -- common/autotest_common.sh@10 -- # set +x 00:07:37.098 05:05:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:37.098 05:05:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:37.099 05:05:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:37.099 05:05:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:37.099 05:05:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:37.099 05:05:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:37.099 05:05:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:37.099 05:05:33 -- nvmf/common.sh@294 -- # net_devs=() 00:07:37.099 05:05:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:37.099 05:05:33 -- nvmf/common.sh@295 -- # e810=() 00:07:37.099 05:05:33 -- nvmf/common.sh@295 -- # local -ga e810 00:07:37.099 05:05:33 -- nvmf/common.sh@296 -- # x722=() 00:07:37.099 05:05:33 -- nvmf/common.sh@296 -- # local -ga x722 00:07:37.099 05:05:33 -- nvmf/common.sh@297 -- # mlx=() 00:07:37.099 05:05:33 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:07:37.099 05:05:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.099 05:05:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:37.099 05:05:33 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:37.099 05:05:33 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:37.099 05:05:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:37.099 05:05:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:37.099 05:05:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:37.099 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:37.099 05:05:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:37.099 05:05:33 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:37.099 05:05:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:37.099 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:37.099 05:05:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:37.099 05:05:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:37.099 05:05:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:07:37.099 05:05:33 -- nvmf/common.sh@376 -- # modinfo irdma 00:07:37.099 05:05:33 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:07:37.099 05:05:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.099 05:05:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:37.099 05:05:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.099 05:05:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:37.099 Found net devices under 0000:af:00.0: cvl_0_0 00:07:37.099 05:05:33 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:37.099 05:05:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.099 05:05:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:37.099 05:05:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.099 05:05:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:37.099 Found net devices under 0000:af:00.1: cvl_0_1 00:07:37.099 05:05:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.099 05:05:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:37.099 05:05:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:37.099 05:05:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:37.099 05:05:33 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:37.099 05:05:33 -- nvmf/common.sh@57 -- # uname 00:07:37.099 05:05:33 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:37.099 05:05:33 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:37.099 05:05:33 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:37.099 05:05:33 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:37.099 05:05:33 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:37.099 05:05:33 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:37.099 05:05:33 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:37.099 05:05:33 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:37.099 05:05:33 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:37.099 05:05:33 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:37.099 05:05:33 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:37.099 05:05:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:37.099 05:05:33 -- nvmf/common.sh@93 -- # mapfile -t 
rxe_net_devs 00:07:37.099 05:05:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:37.099 05:05:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:37.099 05:05:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:37.099 05:05:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:07:37.099 05:05:33 -- nvmf/common.sh@104 -- # continue 2 00:07:37.099 05:05:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:07:37.099 05:05:33 -- nvmf/common.sh@104 -- # continue 2 00:07:37.099 05:05:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:37.099 05:05:33 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:07:37.099 05:05:33 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:07:37.099 05:05:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:07:37.099 05:05:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:37.099 05:05:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:37.099 05:05:33 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:37.099 05:05:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:07:37.099 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:37.099 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:37.099 altname enp175s0f0np0 
00:07:37.099 altname ens801f0np0 00:07:37.099 inet 192.168.100.8/24 scope global cvl_0_0 00:07:37.099 valid_lft forever preferred_lft forever 00:07:37.099 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:37.099 valid_lft forever preferred_lft forever 00:07:37.099 05:05:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:37.099 05:05:33 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:07:37.099 05:05:33 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:07:37.099 05:05:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:07:37.099 05:05:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:37.099 05:05:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:37.099 05:05:33 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:37.099 05:05:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:07:37.099 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:37.099 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:37.099 altname enp175s0f1np1 00:07:37.099 altname ens801f1np1 00:07:37.099 inet 192.168.100.9/24 scope global cvl_0_1 00:07:37.099 valid_lft forever preferred_lft forever 00:07:37.099 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:37.099 valid_lft forever preferred_lft forever 00:07:37.099 05:05:33 -- nvmf/common.sh@410 -- # return 0 00:07:37.099 05:05:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:37.099 05:05:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:37.099 05:05:33 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:37.099 05:05:33 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:37.099 05:05:33 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:37.099 05:05:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:37.099 05:05:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:37.099 05:05:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:37.099 
05:05:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:37.099 05:05:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:37.099 05:05:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:37.099 05:05:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.100 05:05:33 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:37.100 05:05:33 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:07:37.100 05:05:33 -- nvmf/common.sh@104 -- # continue 2 00:07:37.100 05:05:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:37.100 05:05:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.100 05:05:33 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:37.100 05:05:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.100 05:05:33 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:37.100 05:05:33 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:07:37.100 05:05:33 -- nvmf/common.sh@104 -- # continue 2 00:07:37.100 05:05:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:37.100 05:05:33 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:07:37.100 05:05:33 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:07:37.100 05:05:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:07:37.100 05:05:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:37.100 05:05:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:37.100 05:05:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:37.100 05:05:33 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:07:37.100 05:05:33 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:07:37.100 05:05:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:07:37.100 05:05:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:37.100 05:05:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:37.100 05:05:33 -- nvmf/common.sh@444 -- # 
RDMA_IP_LIST='192.168.100.8 00:07:37.100 192.168.100.9' 00:07:37.100 05:05:33 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:37.100 192.168.100.9' 00:07:37.100 05:05:33 -- nvmf/common.sh@445 -- # head -n 1 00:07:37.100 05:05:33 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:37.100 05:05:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:37.100 192.168.100.9' 00:07:37.100 05:05:33 -- nvmf/common.sh@446 -- # tail -n +2 00:07:37.100 05:05:33 -- nvmf/common.sh@446 -- # head -n 1 00:07:37.100 05:05:33 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:37.100 05:05:33 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:37.100 05:05:33 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:37.100 05:05:33 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:37.100 05:05:33 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:37.100 05:05:33 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:37.100 05:05:33 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:37.100 05:05:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:37.100 05:05:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.100 05:05:33 -- common/autotest_common.sh@10 -- # set +x 00:07:37.100 ************************************ 00:07:37.100 START TEST nvmf_filesystem_no_in_capsule 00:07:37.100 ************************************ 00:07:37.100 05:05:33 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:37.100 05:05:33 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:37.100 05:05:33 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:37.100 05:05:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:37.100 05:05:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.100 05:05:33 -- common/autotest_common.sh@10 -- # set +x 00:07:37.100 05:05:33 -- nvmf/common.sh@469 -- # nvmfpid=137539 00:07:37.100 
05:05:33 -- nvmf/common.sh@470 -- # waitforlisten 137539 00:07:37.100 05:05:33 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:37.100 05:05:33 -- common/autotest_common.sh@829 -- # '[' -z 137539 ']' 00:07:37.100 05:05:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.100 05:05:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.100 05:05:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.100 05:05:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.100 05:05:33 -- common/autotest_common.sh@10 -- # set +x 00:07:37.100 [2024-11-20 05:05:33.863993] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:37.100 [2024-11-20 05:05:33.864032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.100 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.100 [2024-11-20 05:05:33.920267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.360 [2024-11-20 05:05:33.990731] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.360 [2024-11-20 05:05:33.990858] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.360 [2024-11-20 05:05:33.990866] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.360 [2024-11-20 05:05:33.990872] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:37.360 [2024-11-20 05:05:33.990921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.360 [2024-11-20 05:05:33.991022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.360 [2024-11-20 05:05:33.991114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.360 [2024-11-20 05:05:33.991116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.929 05:05:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.929 05:05:34 -- common/autotest_common.sh@862 -- # return 0 00:07:37.929 05:05:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:37.929 05:05:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:37.929 05:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:37.929 05:05:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.929 05:05:34 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:37.929 05:05:34 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:37.929 05:05:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.929 05:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:37.929 [2024-11-20 05:05:34.724379] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:37.929 [2024-11-20 05:05:34.738137] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1d27100/0x1d26740) succeed. 00:07:37.929 [2024-11-20 05:05:34.747051] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1d28470/0x1d26cc0) succeed. 00:07:37.929 [2024-11-20 05:05:34.747072] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:37.929 05:05:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.929 05:05:34 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:37.929 05:05:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.929 05:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.189 Malloc1 00:07:38.189 05:05:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.189 05:05:34 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:38.189 05:05:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.189 05:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.189 05:05:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.189 05:05:34 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.189 05:05:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.189 05:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.189 05:05:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.189 05:05:34 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:38.189 05:05:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.189 05:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.189 [2024-11-20 05:05:34.898249] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:38.189 05:05:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.189 05:05:34 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:38.189 05:05:34 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:38.189 05:05:34 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:38.189 05:05:34 -- common/autotest_common.sh@1369 -- # local bs 00:07:38.190 05:05:34 -- common/autotest_common.sh@1370 -- # local 
nb 00:07:38.190 05:05:34 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:38.190 05:05:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.190 05:05:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.190 05:05:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.190 05:05:34 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:38.190 { 00:07:38.190 "name": "Malloc1", 00:07:38.190 "aliases": [ 00:07:38.190 "98ff534c-041c-4f75-af72-95eb28361cbc" 00:07:38.190 ], 00:07:38.190 "product_name": "Malloc disk", 00:07:38.190 "block_size": 512, 00:07:38.190 "num_blocks": 1048576, 00:07:38.190 "uuid": "98ff534c-041c-4f75-af72-95eb28361cbc", 00:07:38.190 "assigned_rate_limits": { 00:07:38.190 "rw_ios_per_sec": 0, 00:07:38.190 "rw_mbytes_per_sec": 0, 00:07:38.190 "r_mbytes_per_sec": 0, 00:07:38.190 "w_mbytes_per_sec": 0 00:07:38.190 }, 00:07:38.190 "claimed": true, 00:07:38.190 "claim_type": "exclusive_write", 00:07:38.190 "zoned": false, 00:07:38.190 "supported_io_types": { 00:07:38.190 "read": true, 00:07:38.190 "write": true, 00:07:38.190 "unmap": true, 00:07:38.190 "write_zeroes": true, 00:07:38.190 "flush": true, 00:07:38.190 "reset": true, 00:07:38.190 "compare": false, 00:07:38.190 "compare_and_write": false, 00:07:38.190 "abort": true, 00:07:38.190 "nvme_admin": false, 00:07:38.190 "nvme_io": false 00:07:38.190 }, 00:07:38.190 "memory_domains": [ 00:07:38.190 { 00:07:38.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.190 "dma_device_type": 2 00:07:38.190 } 00:07:38.190 ], 00:07:38.190 "driver_specific": {} 00:07:38.190 } 00:07:38.190 ]' 00:07:38.190 05:05:34 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:38.190 05:05:34 -- common/autotest_common.sh@1372 -- # bs=512 00:07:38.190 05:05:34 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:38.190 05:05:35 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:38.190 05:05:35 -- common/autotest_common.sh@1376 -- # 
bdev_size=512 00:07:38.190 05:05:35 -- common/autotest_common.sh@1377 -- # echo 512 00:07:38.190 05:05:35 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:38.190 05:05:35 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:38.450 05:05:35 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.450 05:05:35 -- common/autotest_common.sh@1187 -- # local i=0 00:07:38.450 05:05:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.450 05:05:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:38.450 05:05:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:40.989 05:05:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:40.989 05:05:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:40.989 05:05:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.989 05:05:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:40.989 05:05:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.989 05:05:37 -- common/autotest_common.sh@1197 -- # return 0 00:07:40.989 05:05:37 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:40.989 05:05:37 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:40.989 05:05:37 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:40.989 05:05:37 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:40.989 05:05:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:40.989 05:05:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:40.989 05:05:37 -- setup/common.sh@80 -- # echo 536870912 00:07:40.989 05:05:37 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:40.989 05:05:37 -- target/filesystem.sh@66 -- # mkdir 
-p /mnt/device 00:07:40.989 05:05:37 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:40.989 05:05:37 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:40.989 05:05:37 -- target/filesystem.sh@69 -- # partprobe 00:07:40.989 05:05:37 -- target/filesystem.sh@70 -- # sleep 1 00:07:41.929 05:05:38 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:41.929 05:05:38 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:41.929 05:05:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:41.929 05:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.929 05:05:38 -- common/autotest_common.sh@10 -- # set +x 00:07:41.929 ************************************ 00:07:41.929 START TEST filesystem_ext4 00:07:41.929 ************************************ 00:07:41.929 05:05:38 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:41.929 05:05:38 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:41.929 05:05:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.929 05:05:38 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:41.929 05:05:38 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:41.929 05:05:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:41.929 05:05:38 -- common/autotest_common.sh@914 -- # local i=0 00:07:41.929 05:05:38 -- common/autotest_common.sh@915 -- # local force 00:07:41.929 05:05:38 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:41.929 05:05:38 -- common/autotest_common.sh@918 -- # force=-F 00:07:41.930 05:05:38 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:41.930 mke2fs 1.47.0 (5-Feb-2023) 00:07:41.930 Discarding device blocks: 0/522240 done 00:07:41.930 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:41.930 Filesystem UUID: a2c2134f-cf48-4d2e-a989-0ff523d84132 
00:07:41.930 Superblock backups stored on blocks: 00:07:41.930 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:41.930 00:07:41.930 Allocating group tables: 0/64 done 00:07:41.930 Writing inode tables: 0/64 done 00:07:41.930 Creating journal (8192 blocks): done 00:07:41.930 Writing superblocks and filesystem accounting information: 0/64 done 00:07:41.930 00:07:41.930 05:05:38 -- common/autotest_common.sh@931 -- # return 0 00:07:41.930 05:05:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.930 05:05:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.930 05:05:38 -- target/filesystem.sh@25 -- # sync 00:07:41.930 05:05:38 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.930 05:05:38 -- target/filesystem.sh@27 -- # sync 00:07:41.930 05:05:38 -- target/filesystem.sh@29 -- # i=0 00:07:41.930 05:05:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.930 05:05:38 -- target/filesystem.sh@37 -- # kill -0 137539 00:07:41.930 05:05:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.930 05:05:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.930 05:05:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.930 05:05:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.930 00:07:41.930 real 0m0.192s 00:07:41.930 user 0m0.027s 00:07:41.930 sys 0m0.062s 00:07:41.930 05:05:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.930 05:05:38 -- common/autotest_common.sh@10 -- # set +x 00:07:41.930 ************************************ 00:07:41.930 END TEST filesystem_ext4 00:07:41.930 ************************************ 00:07:41.930 05:05:38 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:41.930 05:05:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:41.930 05:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.930 05:05:38 -- common/autotest_common.sh@10 -- # set +x 
00:07:41.930 ************************************ 00:07:41.930 START TEST filesystem_btrfs 00:07:41.930 ************************************ 00:07:41.930 05:05:38 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:41.930 05:05:38 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:41.930 05:05:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.930 05:05:38 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:41.930 05:05:38 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:41.930 05:05:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:41.930 05:05:38 -- common/autotest_common.sh@914 -- # local i=0 00:07:41.930 05:05:38 -- common/autotest_common.sh@915 -- # local force 00:07:41.930 05:05:38 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:41.930 05:05:38 -- common/autotest_common.sh@920 -- # force=-f 00:07:41.930 05:05:38 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:42.190 btrfs-progs v6.8.1 00:07:42.190 See https://btrfs.readthedocs.io for more information. 00:07:42.190 00:07:42.190 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:42.190 NOTE: several default settings have changed in version 5.15, please make sure 00:07:42.190 this does not affect your deployments: 00:07:42.190 - DUP for metadata (-m dup) 00:07:42.190 - enabled no-holes (-O no-holes) 00:07:42.190 - enabled free-space-tree (-R free-space-tree) 00:07:42.190 00:07:42.190 Label: (null) 00:07:42.190 UUID: 6125b114-dbae-4c64-9806-443035505a40 00:07:42.190 Node size: 16384 00:07:42.190 Sector size: 4096 (CPU page size: 4096) 00:07:42.190 Filesystem size: 510.00MiB 00:07:42.190 Block group profiles: 00:07:42.190 Data: single 8.00MiB 00:07:42.190 Metadata: DUP 32.00MiB 00:07:42.190 System: DUP 8.00MiB 00:07:42.190 SSD detected: yes 00:07:42.190 Zoned device: no 00:07:42.190 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:42.190 Checksum: crc32c 00:07:42.190 Number of devices: 1 00:07:42.190 Devices: 00:07:42.190 ID SIZE PATH 00:07:42.190 1 510.00MiB /dev/nvme0n1p1 00:07:42.190 00:07:42.190 05:05:38 -- common/autotest_common.sh@931 -- # return 0 00:07:42.190 05:05:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.190 05:05:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.190 05:05:38 -- target/filesystem.sh@25 -- # sync 00:07:42.190 05:05:38 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.190 05:05:38 -- target/filesystem.sh@27 -- # sync 00:07:42.190 05:05:38 -- target/filesystem.sh@29 -- # i=0 00:07:42.190 05:05:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.190 05:05:38 -- target/filesystem.sh@37 -- # kill -0 137539 00:07:42.190 05:05:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.190 05:05:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.190 05:05:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.190 05:05:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.190 00:07:42.190 real 0m0.269s 00:07:42.190 user 0m0.012s 00:07:42.190 sys 0m0.163s 00:07:42.190 05:05:38 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.190 05:05:38 -- common/autotest_common.sh@10 -- # set +x 00:07:42.190 ************************************ 00:07:42.190 END TEST filesystem_btrfs 00:07:42.190 ************************************ 00:07:42.190 05:05:38 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:42.190 05:05:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:42.190 05:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.190 05:05:38 -- common/autotest_common.sh@10 -- # set +x 00:07:42.190 ************************************ 00:07:42.190 START TEST filesystem_xfs 00:07:42.190 ************************************ 00:07:42.190 05:05:38 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:42.190 05:05:38 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:42.190 05:05:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.190 05:05:38 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:42.190 05:05:38 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:42.190 05:05:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:42.190 05:05:38 -- common/autotest_common.sh@914 -- # local i=0 00:07:42.190 05:05:38 -- common/autotest_common.sh@915 -- # local force 00:07:42.190 05:05:38 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:42.190 05:05:38 -- common/autotest_common.sh@920 -- # force=-f 00:07:42.190 05:05:38 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:42.450 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:42.450 = sectsz=512 attr=2, projid32bit=1 00:07:42.450 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:42.450 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:42.450 data = bsize=4096 blocks=130560, imaxpct=25 00:07:42.450 = sunit=0 swidth=0 blks 00:07:42.450 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:42.450 log 
=internal log bsize=4096 blocks=16384, version=2 00:07:42.450 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:42.450 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:42.450 Discarding blocks...Done. 00:07:42.450 05:05:39 -- common/autotest_common.sh@931 -- # return 0 00:07:42.450 05:05:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.019 05:05:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.019 05:05:39 -- target/filesystem.sh@25 -- # sync 00:07:43.019 05:05:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.019 05:05:39 -- target/filesystem.sh@27 -- # sync 00:07:43.019 05:05:39 -- target/filesystem.sh@29 -- # i=0 00:07:43.019 05:05:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.019 05:05:39 -- target/filesystem.sh@37 -- # kill -0 137539 00:07:43.019 05:05:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.019 05:05:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.019 05:05:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.019 05:05:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.019 00:07:43.019 real 0m0.699s 00:07:43.019 user 0m0.017s 00:07:43.019 sys 0m0.118s 00:07:43.019 05:05:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.019 05:05:39 -- common/autotest_common.sh@10 -- # set +x 00:07:43.019 ************************************ 00:07:43.019 END TEST filesystem_xfs 00:07:43.019 ************************************ 00:07:43.019 05:05:39 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:43.019 05:05:39 -- target/filesystem.sh@93 -- # sync 00:07:43.019 05:05:39 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:43.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.959 05:05:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:43.959 05:05:40 -- common/autotest_common.sh@1208 -- # local i=0 
00:07:43.959 05:05:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:43.959 05:05:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.959 05:05:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:43.959 05:05:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.959 05:05:40 -- common/autotest_common.sh@1220 -- # return 0 00:07:43.959 05:05:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.959 05:05:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.959 05:05:40 -- common/autotest_common.sh@10 -- # set +x 00:07:43.959 05:05:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.959 05:05:40 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:43.959 05:05:40 -- target/filesystem.sh@101 -- # killprocess 137539 00:07:43.959 05:05:40 -- common/autotest_common.sh@936 -- # '[' -z 137539 ']' 00:07:43.959 05:05:40 -- common/autotest_common.sh@940 -- # kill -0 137539 00:07:43.959 05:05:40 -- common/autotest_common.sh@941 -- # uname 00:07:43.959 05:05:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:43.959 05:05:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137539 00:07:43.959 05:05:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:43.959 05:05:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:43.959 05:05:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137539' 00:07:43.959 killing process with pid 137539 00:07:43.959 05:05:40 -- common/autotest_common.sh@955 -- # kill 137539 00:07:43.959 05:05:40 -- common/autotest_common.sh@960 -- # wait 137539 00:07:44.529 05:05:41 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:44.529 00:07:44.529 real 0m7.272s 00:07:44.529 user 0m28.361s 00:07:44.529 sys 0m1.124s 00:07:44.529 05:05:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.529 
05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:07:44.529 ************************************ 00:07:44.529 END TEST nvmf_filesystem_no_in_capsule 00:07:44.529 ************************************ 00:07:44.529 05:05:41 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:44.529 05:05:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:44.529 05:05:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.529 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:07:44.529 ************************************ 00:07:44.529 START TEST nvmf_filesystem_in_capsule 00:07:44.529 ************************************ 00:07:44.529 05:05:41 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:07:44.529 05:05:41 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:44.529 05:05:41 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:44.529 05:05:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:44.529 05:05:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:44.529 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:07:44.529 05:05:41 -- nvmf/common.sh@469 -- # nvmfpid=138908 00:07:44.529 05:05:41 -- nvmf/common.sh@470 -- # waitforlisten 138908 00:07:44.529 05:05:41 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.529 05:05:41 -- common/autotest_common.sh@829 -- # '[' -z 138908 ']' 00:07:44.529 05:05:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.529 05:05:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.529 05:05:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:44.529 05:05:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.529 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:07:44.529 [2024-11-20 05:05:41.179957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.529 [2024-11-20 05:05:41.180001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.529 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.529 [2024-11-20 05:05:41.237146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.529 [2024-11-20 05:05:41.302705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:44.529 [2024-11-20 05:05:41.302830] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.529 [2024-11-20 05:05:41.302837] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.529 [2024-11-20 05:05:41.302843] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:44.530 [2024-11-20 05:05:41.302888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.530 [2024-11-20 05:05:41.302989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.530 [2024-11-20 05:05:41.303072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.530 [2024-11-20 05:05:41.303073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.468 05:05:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.468 05:05:41 -- common/autotest_common.sh@862 -- # return 0 00:07:45.469 05:05:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:45.469 05:05:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:45.469 05:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.469 05:05:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.469 05:05:42 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:45.469 05:05:42 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:45.469 05:05:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.469 05:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.469 [2024-11-20 05:05:42.054144] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xb17100/0xb16740) succeed. 00:07:45.469 [2024-11-20 05:05:42.063149] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xb18470/0xb16cc0) succeed. 00:07:45.469 [2024-11-20 05:05:42.063171] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:45.469 05:05:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.469 05:05:42 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:45.469 05:05:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.469 05:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.469 Malloc1 00:07:45.469 05:05:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.469 05:05:42 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:45.469 05:05:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.469 05:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.469 05:05:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.469 05:05:42 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.469 05:05:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.469 05:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.469 05:05:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.469 05:05:42 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:45.469 05:05:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.469 05:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.469 [2024-11-20 05:05:42.218752] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:45.469 05:05:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.469 05:05:42 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:45.469 05:05:42 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:45.469 05:05:42 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:45.469 05:05:42 -- common/autotest_common.sh@1369 -- # local bs 00:07:45.469 05:05:42 -- common/autotest_common.sh@1370 -- # local 
nb 00:07:45.469 05:05:42 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:45.469 05:05:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.469 05:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.469 05:05:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.469 05:05:42 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:45.469 { 00:07:45.469 "name": "Malloc1", 00:07:45.469 "aliases": [ 00:07:45.469 "7bfa3432-8407-45eb-a6bc-c3d03acd9033" 00:07:45.469 ], 00:07:45.469 "product_name": "Malloc disk", 00:07:45.469 "block_size": 512, 00:07:45.469 "num_blocks": 1048576, 00:07:45.469 "uuid": "7bfa3432-8407-45eb-a6bc-c3d03acd9033", 00:07:45.469 "assigned_rate_limits": { 00:07:45.469 "rw_ios_per_sec": 0, 00:07:45.469 "rw_mbytes_per_sec": 0, 00:07:45.469 "r_mbytes_per_sec": 0, 00:07:45.469 "w_mbytes_per_sec": 0 00:07:45.469 }, 00:07:45.469 "claimed": true, 00:07:45.469 "claim_type": "exclusive_write", 00:07:45.469 "zoned": false, 00:07:45.469 "supported_io_types": { 00:07:45.469 "read": true, 00:07:45.469 "write": true, 00:07:45.469 "unmap": true, 00:07:45.469 "write_zeroes": true, 00:07:45.469 "flush": true, 00:07:45.469 "reset": true, 00:07:45.469 "compare": false, 00:07:45.469 "compare_and_write": false, 00:07:45.469 "abort": true, 00:07:45.469 "nvme_admin": false, 00:07:45.469 "nvme_io": false 00:07:45.469 }, 00:07:45.469 "memory_domains": [ 00:07:45.469 { 00:07:45.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.469 "dma_device_type": 2 00:07:45.469 } 00:07:45.469 ], 00:07:45.469 "driver_specific": {} 00:07:45.469 } 00:07:45.469 ]' 00:07:45.469 05:05:42 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:45.469 05:05:42 -- common/autotest_common.sh@1372 -- # bs=512 00:07:45.469 05:05:42 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:45.728 05:05:42 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:45.728 05:05:42 -- common/autotest_common.sh@1376 -- # 
bdev_size=512 00:07:45.728 05:05:42 -- common/autotest_common.sh@1377 -- # echo 512 00:07:45.728 05:05:42 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:45.728 05:05:42 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:45.728 05:05:42 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.728 05:05:42 -- common/autotest_common.sh@1187 -- # local i=0 00:07:45.728 05:05:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.728 05:05:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:45.728 05:05:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:48.267 05:05:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:48.267 05:05:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:48.267 05:05:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.267 05:05:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:48.267 05:05:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.267 05:05:44 -- common/autotest_common.sh@1197 -- # return 0 00:07:48.267 05:05:44 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:48.267 05:05:44 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:48.267 05:05:44 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:48.267 05:05:44 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:48.267 05:05:44 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:48.267 05:05:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:48.267 05:05:44 -- setup/common.sh@80 -- # echo 536870912 00:07:48.267 05:05:44 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:48.267 05:05:44 -- target/filesystem.sh@66 -- # mkdir 
-p /mnt/device 00:07:48.267 05:05:44 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:48.267 05:05:44 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:48.267 05:05:44 -- target/filesystem.sh@69 -- # partprobe 00:07:48.267 05:05:44 -- target/filesystem.sh@70 -- # sleep 1 00:07:49.206 05:05:45 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:49.206 05:05:45 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:49.206 05:05:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.206 05:05:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.206 05:05:45 -- common/autotest_common.sh@10 -- # set +x 00:07:49.206 ************************************ 00:07:49.206 START TEST filesystem_in_capsule_ext4 00:07:49.207 ************************************ 00:07:49.207 05:05:45 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:49.207 05:05:45 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:49.207 05:05:45 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.207 05:05:45 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:49.207 05:05:45 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:49.207 05:05:45 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:49.207 05:05:45 -- common/autotest_common.sh@914 -- # local i=0 00:07:49.207 05:05:45 -- common/autotest_common.sh@915 -- # local force 00:07:49.207 05:05:45 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:49.207 05:05:45 -- common/autotest_common.sh@918 -- # force=-F 00:07:49.207 05:05:45 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:49.207 mke2fs 1.47.0 (5-Feb-2023) 00:07:49.207 Discarding device blocks: 0/522240 done 00:07:49.207 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.207 Filesystem UUID: 
66021145-6c3a-42cc-8c12-3bb8ddf1aca5 00:07:49.207 Superblock backups stored on blocks: 00:07:49.207 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.207 00:07:49.207 Allocating group tables: 0/64 done 00:07:49.207 Writing inode tables: 0/64 done 00:07:49.207 Creating journal (8192 blocks): done 00:07:49.207 Writing superblocks and filesystem accounting information: 0/64 done 00:07:49.207 00:07:49.207 05:05:45 -- common/autotest_common.sh@931 -- # return 0 00:07:49.207 05:05:45 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.207 05:05:45 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.207 05:05:45 -- target/filesystem.sh@25 -- # sync 00:07:49.207 05:05:45 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.207 05:05:45 -- target/filesystem.sh@27 -- # sync 00:07:49.207 05:05:45 -- target/filesystem.sh@29 -- # i=0 00:07:49.207 05:05:45 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.207 05:05:45 -- target/filesystem.sh@37 -- # kill -0 138908 00:07:49.207 05:05:45 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.207 05:05:45 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.207 05:05:45 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.207 05:05:45 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.207 00:07:49.207 real 0m0.183s 00:07:49.207 user 0m0.027s 00:07:49.207 sys 0m0.055s 00:07:49.207 05:05:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.207 05:05:45 -- common/autotest_common.sh@10 -- # set +x 00:07:49.207 ************************************ 00:07:49.207 END TEST filesystem_in_capsule_ext4 00:07:49.207 ************************************ 00:07:49.207 05:05:45 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:49.207 05:05:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.207 05:05:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.207 
05:05:45 -- common/autotest_common.sh@10 -- # set +x 00:07:49.207 ************************************ 00:07:49.207 START TEST filesystem_in_capsule_btrfs 00:07:49.207 ************************************ 00:07:49.207 05:05:45 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:49.207 05:05:45 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:49.207 05:05:45 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.207 05:05:45 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:49.207 05:05:45 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:49.207 05:05:45 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:49.207 05:05:45 -- common/autotest_common.sh@914 -- # local i=0 00:07:49.207 05:05:45 -- common/autotest_common.sh@915 -- # local force 00:07:49.207 05:05:45 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:49.207 05:05:45 -- common/autotest_common.sh@920 -- # force=-f 00:07:49.207 05:05:45 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:49.467 btrfs-progs v6.8.1 00:07:49.467 See https://btrfs.readthedocs.io for more information. 00:07:49.467 00:07:49.467 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:49.467 NOTE: several default settings have changed in version 5.15, please make sure 00:07:49.467 this does not affect your deployments: 00:07:49.467 - DUP for metadata (-m dup) 00:07:49.467 - enabled no-holes (-O no-holes) 00:07:49.467 - enabled free-space-tree (-R free-space-tree) 00:07:49.467 00:07:49.467 Label: (null) 00:07:49.467 UUID: 91345389-4696-43fd-b72f-268b8367d53e 00:07:49.467 Node size: 16384 00:07:49.467 Sector size: 4096 (CPU page size: 4096) 00:07:49.467 Filesystem size: 510.00MiB 00:07:49.467 Block group profiles: 00:07:49.467 Data: single 8.00MiB 00:07:49.467 Metadata: DUP 32.00MiB 00:07:49.467 System: DUP 8.00MiB 00:07:49.467 SSD detected: yes 00:07:49.467 Zoned device: no 00:07:49.467 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:49.467 Checksum: crc32c 00:07:49.467 Number of devices: 1 00:07:49.467 Devices: 00:07:49.467 ID SIZE PATH 00:07:49.467 1 510.00MiB /dev/nvme0n1p1 00:07:49.467 00:07:49.467 05:05:46 -- common/autotest_common.sh@931 -- # return 0 00:07:49.467 05:05:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.467 05:05:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.467 05:05:46 -- target/filesystem.sh@25 -- # sync 00:07:49.468 05:05:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.468 05:05:46 -- target/filesystem.sh@27 -- # sync 00:07:49.468 05:05:46 -- target/filesystem.sh@29 -- # i=0 00:07:49.468 05:05:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.468 05:05:46 -- target/filesystem.sh@37 -- # kill -0 138908 00:07:49.468 05:05:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.468 05:05:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.468 05:05:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.468 05:05:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.468 00:07:49.468 real 0m0.231s 00:07:49.468 user 0m0.025s 00:07:49.468 sys 0m0.111s 00:07:49.468 05:05:46 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.468 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:07:49.468 ************************************ 00:07:49.468 END TEST filesystem_in_capsule_btrfs 00:07:49.468 ************************************ 00:07:49.468 05:05:46 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:49.468 05:05:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.468 05:05:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.468 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:07:49.468 ************************************ 00:07:49.468 START TEST filesystem_in_capsule_xfs 00:07:49.468 ************************************ 00:07:49.468 05:05:46 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:49.468 05:05:46 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:49.468 05:05:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.468 05:05:46 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:49.468 05:05:46 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:49.468 05:05:46 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:49.468 05:05:46 -- common/autotest_common.sh@914 -- # local i=0 00:07:49.468 05:05:46 -- common/autotest_common.sh@915 -- # local force 00:07:49.468 05:05:46 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:49.468 05:05:46 -- common/autotest_common.sh@920 -- # force=-f 00:07:49.468 05:05:46 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:49.728 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:49.728 = sectsz=512 attr=2, projid32bit=1 00:07:49.728 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:49.728 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:49.728 data = bsize=4096 blocks=130560, imaxpct=25 00:07:49.728 = sunit=0 swidth=0 blks 00:07:49.728 naming =version 2 bsize=4096 
ascii-ci=0, ftype=1 00:07:49.728 log =internal log bsize=4096 blocks=16384, version=2 00:07:49.728 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:49.728 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:49.728 Discarding blocks...Done. 00:07:49.728 05:05:46 -- common/autotest_common.sh@931 -- # return 0 00:07:49.728 05:05:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.728 05:05:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.728 05:05:46 -- target/filesystem.sh@25 -- # sync 00:07:49.728 05:05:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.728 05:05:46 -- target/filesystem.sh@27 -- # sync 00:07:49.728 05:05:46 -- target/filesystem.sh@29 -- # i=0 00:07:49.728 05:05:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.728 05:05:46 -- target/filesystem.sh@37 -- # kill -0 138908 00:07:49.728 05:05:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.728 05:05:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.728 05:05:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.728 05:05:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.728 00:07:49.728 real 0m0.193s 00:07:49.728 user 0m0.028s 00:07:49.728 sys 0m0.062s 00:07:49.728 05:05:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.728 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:07:49.728 ************************************ 00:07:49.728 END TEST filesystem_in_capsule_xfs 00:07:49.728 ************************************ 00:07:49.728 05:05:46 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:49.728 05:05:46 -- target/filesystem.sh@93 -- # sync 00:07:49.728 05:05:46 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:50.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.669 05:05:47 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:50.669 05:05:47 -- 
common/autotest_common.sh@1208 -- # local i=0 00:07:50.669 05:05:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:50.669 05:05:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.669 05:05:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:50.669 05:05:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.669 05:05:47 -- common/autotest_common.sh@1220 -- # return 0 00:07:50.669 05:05:47 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.669 05:05:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.669 05:05:47 -- common/autotest_common.sh@10 -- # set +x 00:07:50.669 05:05:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.669 05:05:47 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:50.669 05:05:47 -- target/filesystem.sh@101 -- # killprocess 138908 00:07:50.669 05:05:47 -- common/autotest_common.sh@936 -- # '[' -z 138908 ']' 00:07:50.669 05:05:47 -- common/autotest_common.sh@940 -- # kill -0 138908 00:07:50.669 05:05:47 -- common/autotest_common.sh@941 -- # uname 00:07:50.669 05:05:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:50.669 05:05:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138908 00:07:50.669 05:05:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:50.669 05:05:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:50.669 05:05:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138908' 00:07:50.669 killing process with pid 138908 00:07:50.669 05:05:47 -- common/autotest_common.sh@955 -- # kill 138908 00:07:50.669 05:05:47 -- common/autotest_common.sh@960 -- # wait 138908 00:07:51.238 05:05:47 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:51.238 00:07:51.238 real 0m6.678s 00:07:51.238 user 0m26.045s 00:07:51.238 sys 0m0.975s 00:07:51.238 05:05:47 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.238 05:05:47 -- common/autotest_common.sh@10 -- # set +x 00:07:51.238 ************************************ 00:07:51.238 END TEST nvmf_filesystem_in_capsule 00:07:51.238 ************************************ 00:07:51.238 05:05:47 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:51.238 05:05:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:51.238 05:05:47 -- nvmf/common.sh@116 -- # sync 00:07:51.238 05:05:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:07:51.238 05:05:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:07:51.238 05:05:47 -- nvmf/common.sh@119 -- # set +e 00:07:51.238 05:05:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:51.238 05:05:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:07:51.238 rmmod nvme_rdma 00:07:51.238 rmmod nvme_fabrics 00:07:51.238 05:05:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:51.238 05:05:47 -- nvmf/common.sh@123 -- # set -e 00:07:51.238 05:05:47 -- nvmf/common.sh@124 -- # return 0 00:07:51.238 05:05:47 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:51.238 05:05:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:51.238 05:05:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:07:51.238 00:07:51.238 real 0m19.931s 00:07:51.238 user 0m56.309s 00:07:51.238 sys 0m6.332s 00:07:51.238 05:05:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.239 05:05:47 -- common/autotest_common.sh@10 -- # set +x 00:07:51.239 ************************************ 00:07:51.239 END TEST nvmf_filesystem 00:07:51.239 ************************************ 00:07:51.239 05:05:47 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:51.239 05:05:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.239 05:05:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.239 05:05:47 -- common/autotest_common.sh@10 -- # set +x 
00:07:51.239 ************************************ 00:07:51.239 START TEST nvmf_discovery 00:07:51.239 ************************************ 00:07:51.239 05:05:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:51.239 * Looking for test storage... 00:07:51.239 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:51.239 05:05:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.239 05:05:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.239 05:05:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.500 05:05:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.500 05:05:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.500 05:05:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.500 05:05:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.500 05:05:48 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.500 05:05:48 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.500 05:05:48 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.500 05:05:48 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.500 05:05:48 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.500 05:05:48 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.500 05:05:48 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.500 05:05:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.500 05:05:48 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.500 05:05:48 -- scripts/common.sh@344 -- # : 1 00:07:51.500 05:05:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.500 05:05:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.500 05:05:48 -- scripts/common.sh@364 -- # decimal 1 00:07:51.500 05:05:48 -- scripts/common.sh@352 -- # local d=1 00:07:51.500 05:05:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.500 05:05:48 -- scripts/common.sh@354 -- # echo 1 00:07:51.500 05:05:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.500 05:05:48 -- scripts/common.sh@365 -- # decimal 2 00:07:51.500 05:05:48 -- scripts/common.sh@352 -- # local d=2 00:07:51.500 05:05:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.500 05:05:48 -- scripts/common.sh@354 -- # echo 2 00:07:51.500 05:05:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.500 05:05:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.500 05:05:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.500 05:05:48 -- scripts/common.sh@367 -- # return 0 00:07:51.500 05:05:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.500 05:05:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.500 --rc genhtml_branch_coverage=1 00:07:51.500 --rc genhtml_function_coverage=1 00:07:51.500 --rc genhtml_legend=1 00:07:51.500 --rc geninfo_all_blocks=1 00:07:51.500 --rc geninfo_unexecuted_blocks=1 00:07:51.500 00:07:51.500 ' 00:07:51.500 05:05:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.500 --rc genhtml_branch_coverage=1 00:07:51.500 --rc genhtml_function_coverage=1 00:07:51.500 --rc genhtml_legend=1 00:07:51.500 --rc geninfo_all_blocks=1 00:07:51.500 --rc geninfo_unexecuted_blocks=1 00:07:51.500 00:07:51.500 ' 00:07:51.500 05:05:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.500 --rc genhtml_branch_coverage=1 00:07:51.500 --rc 
genhtml_function_coverage=1 00:07:51.500 --rc genhtml_legend=1 00:07:51.500 --rc geninfo_all_blocks=1 00:07:51.500 --rc geninfo_unexecuted_blocks=1 00:07:51.500 00:07:51.500 ' 00:07:51.500 05:05:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.500 --rc genhtml_branch_coverage=1 00:07:51.500 --rc genhtml_function_coverage=1 00:07:51.500 --rc genhtml_legend=1 00:07:51.500 --rc geninfo_all_blocks=1 00:07:51.500 --rc geninfo_unexecuted_blocks=1 00:07:51.500 00:07:51.500 ' 00:07:51.500 05:05:48 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.500 05:05:48 -- nvmf/common.sh@7 -- # uname -s 00:07:51.500 05:05:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.500 05:05:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.500 05:05:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.500 05:05:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.500 05:05:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.500 05:05:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.500 05:05:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.500 05:05:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.500 05:05:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.500 05:05:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.500 05:05:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:51.500 05:05:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:51.500 05:05:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.500 05:05:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.500 05:05:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:51.500 05:05:48 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:51.500 05:05:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.500 05:05:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.500 05:05:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.500 05:05:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.500 05:05:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.500 05:05:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.500 05:05:48 -- paths/export.sh@5 -- # export PATH 00:07:51.500 05:05:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.500 05:05:48 -- nvmf/common.sh@46 -- # : 0 00:07:51.500 05:05:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:51.500 05:05:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:51.500 05:05:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:51.500 05:05:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.500 05:05:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.500 05:05:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:51.500 05:05:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:51.500 05:05:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:51.500 05:05:48 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:51.500 05:05:48 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:51.500 05:05:48 -- 
target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:51.500 05:05:48 -- target/discovery.sh@15 -- # hash nvme 00:07:51.500 05:05:48 -- target/discovery.sh@20 -- # nvmftestinit 00:07:51.500 05:05:48 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:51.500 05:05:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.500 05:05:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:51.500 05:05:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:51.500 05:05:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:51.501 05:05:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.501 05:05:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.501 05:05:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.501 05:05:48 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:07:51.501 05:05:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:51.501 05:05:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:51.501 05:05:48 -- common/autotest_common.sh@10 -- # set +x 00:07:56.783 05:05:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:56.783 05:05:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:56.783 05:05:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:56.783 05:05:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:56.783 05:05:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:56.783 05:05:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:56.783 05:05:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:56.783 05:05:53 -- nvmf/common.sh@294 -- # net_devs=() 00:07:56.783 05:05:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:56.783 05:05:53 -- nvmf/common.sh@295 -- # e810=() 00:07:56.783 05:05:53 -- nvmf/common.sh@295 -- # local -ga e810 00:07:56.783 05:05:53 -- nvmf/common.sh@296 -- # x722=() 00:07:56.783 05:05:53 -- nvmf/common.sh@296 -- # local -ga x722 00:07:56.783 05:05:53 -- nvmf/common.sh@297 -- # mlx=() 
00:07:56.784 05:05:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:56.784 05:05:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.784 05:05:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:56.784 05:05:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:56.784 05:05:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:56.784 05:05:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:56.784 05:05:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:56.784 05:05:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:56.784 05:05:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:56.784 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:56.784 05:05:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 
00:07:56.784 05:05:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:56.784 05:05:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:56.784 05:05:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:56.784 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:56.784 05:05:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:56.784 05:05:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:56.784 05:05:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:07:56.784 05:05:53 -- nvmf/common.sh@376 -- # modinfo irdma 00:07:56.784 05:05:53 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:07:56.784 05:05:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:56.784 05:05:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.784 05:05:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:56.784 05:05:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.784 05:05:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:56.784 Found net devices under 0000:af:00.0: cvl_0_0 00:07:56.784 05:05:53 -- nvmf/common.sh@389 
-- # net_devs+=("${pci_net_devs[@]}") 00:07:56.784 05:05:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:56.784 05:05:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.784 05:05:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:56.784 05:05:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.784 05:05:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:56.784 Found net devices under 0000:af:00.1: cvl_0_1 00:07:56.784 05:05:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.784 05:05:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:56.784 05:05:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:56.784 05:05:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:56.784 05:05:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:56.784 05:05:53 -- nvmf/common.sh@57 -- # uname 00:07:56.784 05:05:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:56.784 05:05:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:56.784 05:05:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:56.784 05:05:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:56.784 05:05:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:56.784 05:05:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:56.784 05:05:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:56.784 05:05:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:56.784 05:05:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:56.784 05:05:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:56.784 05:05:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:56.784 05:05:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:56.784 05:05:53 -- nvmf/common.sh@93 -- # 
mapfile -t rxe_net_devs 00:07:56.784 05:05:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:56.784 05:05:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:56.784 05:05:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:56.784 05:05:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:56.784 05:05:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.784 05:05:53 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:07:56.784 05:05:53 -- nvmf/common.sh@104 -- # continue 2 00:07:56.784 05:05:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:56.784 05:05:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.784 05:05:53 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.784 05:05:53 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:07:56.784 05:05:53 -- nvmf/common.sh@104 -- # continue 2 00:07:56.784 05:05:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:56.784 05:05:53 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:07:56.784 05:05:53 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:07:56.784 05:05:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:07:56.784 05:05:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:56.784 05:05:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:56.784 05:05:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:56.784 05:05:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:56.784 05:05:53 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:07:56.784 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:56.784 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:56.784 altname 
enp175s0f0np0 00:07:56.784 altname ens801f0np0 00:07:56.784 inet 192.168.100.8/24 scope global cvl_0_0 00:07:56.784 valid_lft forever preferred_lft forever 00:07:56.784 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:56.784 valid_lft forever preferred_lft forever 00:07:56.784 05:05:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:56.784 05:05:53 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:07:56.784 05:05:53 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:07:56.784 05:05:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:07:56.784 05:05:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:56.784 05:05:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:56.784 05:05:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:56.785 05:05:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:56.785 05:05:53 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:07:56.785 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:56.785 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:56.785 altname enp175s0f1np1 00:07:56.785 altname ens801f1np1 00:07:56.785 inet 192.168.100.9/24 scope global cvl_0_1 00:07:56.785 valid_lft forever preferred_lft forever 00:07:56.785 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:56.785 valid_lft forever preferred_lft forever 00:07:56.785 05:05:53 -- nvmf/common.sh@410 -- # return 0 00:07:56.785 05:05:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:56.785 05:05:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:56.785 05:05:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:56.785 05:05:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:56.785 05:05:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:56.785 05:05:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:56.785 05:05:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:56.785 05:05:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 
00:07:56.785 05:05:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:56.785 05:05:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:56.785 05:05:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:56.785 05:05:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.785 05:05:53 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:56.785 05:05:53 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:07:56.785 05:05:53 -- nvmf/common.sh@104 -- # continue 2 00:07:56.785 05:05:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:56.785 05:05:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.785 05:05:53 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:56.785 05:05:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.785 05:05:53 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:56.785 05:05:53 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:07:56.785 05:05:53 -- nvmf/common.sh@104 -- # continue 2 00:07:56.785 05:05:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:56.785 05:05:53 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:07:56.785 05:05:53 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:07:56.785 05:05:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:07:56.785 05:05:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:56.785 05:05:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:56.785 05:05:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:56.785 05:05:53 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:07:56.785 05:05:53 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:07:56.785 05:05:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:07:56.785 05:05:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:56.785 05:05:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:56.785 05:05:53 -- 
nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:56.785 192.168.100.9' 00:07:56.785 05:05:53 -- nvmf/common.sh@445 -- # head -n 1 00:07:56.785 05:05:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:56.785 192.168.100.9' 00:07:56.785 05:05:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:56.785 05:05:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:56.785 192.168.100.9' 00:07:56.785 05:05:53 -- nvmf/common.sh@446 -- # tail -n +2 00:07:56.785 05:05:53 -- nvmf/common.sh@446 -- # head -n 1 00:07:56.785 05:05:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:56.785 05:05:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:56.785 05:05:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:56.785 05:05:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:56.785 05:05:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:56.785 05:05:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:56.785 05:05:53 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:56.785 05:05:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:56.785 05:05:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.785 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:07:56.785 05:05:53 -- nvmf/common.sh@469 -- # nvmfpid=143324 00:07:56.785 05:05:53 -- nvmf/common.sh@470 -- # waitforlisten 143324 00:07:56.785 05:05:53 -- common/autotest_common.sh@829 -- # '[' -z 143324 ']' 00:07:56.785 05:05:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.785 05:05:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.785 05:05:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.785 05:05:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:56.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.785 05:05:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.785 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:07:56.785 [2024-11-20 05:05:53.543593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:56.785 [2024-11-20 05:05:53.543639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.785 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.785 [2024-11-20 05:05:53.599558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.045 [2024-11-20 05:05:53.675837] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:57.045 [2024-11-20 05:05:53.675945] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.045 [2024-11-20 05:05:53.675954] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.045 [2024-11-20 05:05:53.675960] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:57.045 [2024-11-20 05:05:53.676001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.045 [2024-11-20 05:05:53.676112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.045 [2024-11-20 05:05:53.676128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.045 [2024-11-20 05:05:53.676129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.615 05:05:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.615 05:05:54 -- common/autotest_common.sh@862 -- # return 0 00:07:57.615 05:05:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:57.615 05:05:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.615 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.615 05:05:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.615 05:05:54 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:57.615 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.615 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.615 [2024-11-20 05:05:54.433111] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1a83100/0x1a82740) succeed. 00:07:57.875 [2024-11-20 05:05:54.441997] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1a84470/0x1a82cc0) succeed. 00:07:57.875 [2024-11-20 05:05:54.442020] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@26 -- # seq 1 4 00:07:57.875 05:05:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.875 05:05:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 Null1 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 [2024-11-20 05:05:54.506438] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.875 05:05:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 Null2 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.875 05:05:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 Null3 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@29 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.875 05:05:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:57.875 05:05:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:57.875 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.875 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.875 Null4 00:07:57.876 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.876 05:05:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:57.876 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.876 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.876 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.876 05:05:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:57.876 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.876 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.876 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.876 05:05:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:07:57.876 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.876 05:05:54 -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.876 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.876 05:05:54 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:57.876 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.876 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.876 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.876 05:05:54 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:07:57.876 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.876 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.876 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.876 05:05:54 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:07:58.136 00:07:58.136 Discovery Log Number of Records 6, Generation counter 6 00:07:58.136 =====Discovery Log Entry 0====== 00:07:58.136 trtype: rdma 00:07:58.136 adrfam: ipv4 00:07:58.136 subtype: current discovery subsystem 00:07:58.136 treq: not required 00:07:58.136 portid: 0 00:07:58.136 trsvcid: 4420 00:07:58.136 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:58.136 traddr: 192.168.100.8 00:07:58.136 eflags: explicit discovery connections, duplicate discovery information 00:07:58.136 rdma_prtype: not specified 00:07:58.136 rdma_qptype: connected 00:07:58.136 rdma_cms: rdma-cm 00:07:58.136 rdma_pkey: 0x0000 00:07:58.136 =====Discovery Log Entry 1====== 00:07:58.136 trtype: rdma 00:07:58.136 adrfam: ipv4 00:07:58.136 subtype: nvme subsystem 00:07:58.136 treq: not required 00:07:58.136 portid: 0 00:07:58.136 trsvcid: 4420 00:07:58.136 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:58.136 traddr: 192.168.100.8 00:07:58.136 eflags: none 00:07:58.136 
rdma_prtype: not specified 00:07:58.136 rdma_qptype: connected 00:07:58.136 rdma_cms: rdma-cm 00:07:58.136 rdma_pkey: 0x0000 00:07:58.136 =====Discovery Log Entry 2====== 00:07:58.136 trtype: rdma 00:07:58.136 adrfam: ipv4 00:07:58.136 subtype: nvme subsystem 00:07:58.136 treq: not required 00:07:58.136 portid: 0 00:07:58.136 trsvcid: 4420 00:07:58.136 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:58.136 traddr: 192.168.100.8 00:07:58.136 eflags: none 00:07:58.136 rdma_prtype: not specified 00:07:58.136 rdma_qptype: connected 00:07:58.136 rdma_cms: rdma-cm 00:07:58.136 rdma_pkey: 0x0000 00:07:58.136 =====Discovery Log Entry 3====== 00:07:58.136 trtype: rdma 00:07:58.136 adrfam: ipv4 00:07:58.136 subtype: nvme subsystem 00:07:58.136 treq: not required 00:07:58.136 portid: 0 00:07:58.136 trsvcid: 4420 00:07:58.136 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:58.136 traddr: 192.168.100.8 00:07:58.136 eflags: none 00:07:58.136 rdma_prtype: not specified 00:07:58.136 rdma_qptype: connected 00:07:58.136 rdma_cms: rdma-cm 00:07:58.136 rdma_pkey: 0x0000 00:07:58.136 =====Discovery Log Entry 4====== 00:07:58.136 trtype: rdma 00:07:58.136 adrfam: ipv4 00:07:58.136 subtype: nvme subsystem 00:07:58.136 treq: not required 00:07:58.136 portid: 0 00:07:58.136 trsvcid: 4420 00:07:58.136 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:58.136 traddr: 192.168.100.8 00:07:58.136 eflags: none 00:07:58.136 rdma_prtype: not specified 00:07:58.136 rdma_qptype: connected 00:07:58.136 rdma_cms: rdma-cm 00:07:58.136 rdma_pkey: 0x0000 00:07:58.136 =====Discovery Log Entry 5====== 00:07:58.136 trtype: rdma 00:07:58.136 adrfam: ipv4 00:07:58.136 subtype: discovery subsystem referral 00:07:58.136 treq: not required 00:07:58.136 portid: 0 00:07:58.136 trsvcid: 4430 00:07:58.136 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:58.136 traddr: 192.168.100.8 00:07:58.136 eflags: none 00:07:58.136 rdma_prtype: unrecognized 00:07:58.136 rdma_qptype: unrecognized 00:07:58.136 rdma_cms: unrecognized 00:07:58.136 
rdma_pkey: 0x0000 00:07:58.136 05:05:54 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:58.136 Perform nvmf subsystem discovery via RPC 00:07:58.136 05:05:54 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:58.136 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.136 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.136 [2024-11-20 05:05:54.743219] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:58.136 [ 00:07:58.136 { 00:07:58.136 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:58.136 "subtype": "Discovery", 00:07:58.136 "listen_addresses": [ 00:07:58.136 { 00:07:58.136 "transport": "RDMA", 00:07:58.136 "trtype": "RDMA", 00:07:58.136 "adrfam": "IPv4", 00:07:58.136 "traddr": "192.168.100.8", 00:07:58.136 "trsvcid": "4420" 00:07:58.136 } 00:07:58.136 ], 00:07:58.136 "allow_any_host": true, 00:07:58.136 "hosts": [] 00:07:58.136 }, 00:07:58.136 { 00:07:58.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:58.136 "subtype": "NVMe", 00:07:58.136 "listen_addresses": [ 00:07:58.136 { 00:07:58.136 "transport": "RDMA", 00:07:58.136 "trtype": "RDMA", 00:07:58.136 "adrfam": "IPv4", 00:07:58.136 "traddr": "192.168.100.8", 00:07:58.136 "trsvcid": "4420" 00:07:58.136 } 00:07:58.136 ], 00:07:58.136 "allow_any_host": true, 00:07:58.136 "hosts": [], 00:07:58.136 "serial_number": "SPDK00000000000001", 00:07:58.136 "model_number": "SPDK bdev Controller", 00:07:58.136 "max_namespaces": 32, 00:07:58.136 "min_cntlid": 1, 00:07:58.136 "max_cntlid": 65519, 00:07:58.136 "namespaces": [ 00:07:58.136 { 00:07:58.136 "nsid": 1, 00:07:58.136 "bdev_name": "Null1", 00:07:58.136 "name": "Null1", 00:07:58.136 "nguid": "66FE6463E05144ED83583A1FE033F304", 00:07:58.136 "uuid": "66fe6463-e051-44ed-8358-3a1fe033f304" 00:07:58.136 } 00:07:58.136 ] 00:07:58.136 }, 00:07:58.136 { 00:07:58.136 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:58.136 "subtype": "NVMe", 00:07:58.136 "listen_addresses": [ 00:07:58.136 { 00:07:58.136 "transport": "RDMA", 00:07:58.136 "trtype": "RDMA", 00:07:58.136 "adrfam": "IPv4", 00:07:58.136 "traddr": "192.168.100.8", 00:07:58.137 "trsvcid": "4420" 00:07:58.137 } 00:07:58.137 ], 00:07:58.137 "allow_any_host": true, 00:07:58.137 "hosts": [], 00:07:58.137 "serial_number": "SPDK00000000000002", 00:07:58.137 "model_number": "SPDK bdev Controller", 00:07:58.137 "max_namespaces": 32, 00:07:58.137 "min_cntlid": 1, 00:07:58.137 "max_cntlid": 65519, 00:07:58.137 "namespaces": [ 00:07:58.137 { 00:07:58.137 "nsid": 1, 00:07:58.137 "bdev_name": "Null2", 00:07:58.137 "name": "Null2", 00:07:58.137 "nguid": "B258D34F515644EF824380A5C33513AD", 00:07:58.137 "uuid": "b258d34f-5156-44ef-8243-80a5c33513ad" 00:07:58.137 } 00:07:58.137 ] 00:07:58.137 }, 00:07:58.137 { 00:07:58.137 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:58.137 "subtype": "NVMe", 00:07:58.137 "listen_addresses": [ 00:07:58.137 { 00:07:58.137 "transport": "RDMA", 00:07:58.137 "trtype": "RDMA", 00:07:58.137 "adrfam": "IPv4", 00:07:58.137 "traddr": "192.168.100.8", 00:07:58.137 "trsvcid": "4420" 00:07:58.137 } 00:07:58.137 ], 00:07:58.137 "allow_any_host": true, 00:07:58.137 "hosts": [], 00:07:58.137 "serial_number": "SPDK00000000000003", 00:07:58.137 "model_number": "SPDK bdev Controller", 00:07:58.137 "max_namespaces": 32, 00:07:58.137 "min_cntlid": 1, 00:07:58.137 "max_cntlid": 65519, 00:07:58.137 "namespaces": [ 00:07:58.137 { 00:07:58.137 "nsid": 1, 00:07:58.137 "bdev_name": "Null3", 00:07:58.137 "name": "Null3", 00:07:58.137 "nguid": "A8753B74AB60461AAECB5101F4205EE2", 00:07:58.137 "uuid": "a8753b74-ab60-461a-aecb-5101f4205ee2" 00:07:58.137 } 00:07:58.137 ] 00:07:58.137 }, 00:07:58.137 { 00:07:58.137 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:58.137 "subtype": "NVMe", 00:07:58.137 "listen_addresses": [ 00:07:58.137 { 00:07:58.137 "transport": "RDMA", 00:07:58.137 "trtype": 
"RDMA", 00:07:58.137 "adrfam": "IPv4", 00:07:58.137 "traddr": "192.168.100.8", 00:07:58.137 "trsvcid": "4420" 00:07:58.137 } 00:07:58.137 ], 00:07:58.137 "allow_any_host": true, 00:07:58.137 "hosts": [], 00:07:58.137 "serial_number": "SPDK00000000000004", 00:07:58.137 "model_number": "SPDK bdev Controller", 00:07:58.137 "max_namespaces": 32, 00:07:58.137 "min_cntlid": 1, 00:07:58.137 "max_cntlid": 65519, 00:07:58.137 "namespaces": [ 00:07:58.137 { 00:07:58.137 "nsid": 1, 00:07:58.137 "bdev_name": "Null4", 00:07:58.137 "name": "Null4", 00:07:58.137 "nguid": "DE358E7E401548BCBF56E2106C3001B7", 00:07:58.137 "uuid": "de358e7e-4015-48bc-bf56-e2106c3001b7" 00:07:58.137 } 00:07:58.137 ] 00:07:58.137 } 00:07:58.137 ] 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@42 -- # seq 1 4 00:07:58.137 05:05:54 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:58.137 05:05:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:58.137 05:05:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@44 -- # 
rpc_cmd bdev_null_delete Null2 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:58.137 05:05:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:58.137 05:05:54 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- 
target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:58.137 05:05:54 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:58.137 05:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.137 05:05:54 -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 05:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.137 05:05:54 -- target/discovery.sh@49 -- # check_bdevs= 00:07:58.137 05:05:54 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:58.137 05:05:54 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:58.137 05:05:54 -- target/discovery.sh@57 -- # nvmftestfini 00:07:58.137 05:05:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:58.137 05:05:54 -- nvmf/common.sh@116 -- # sync 00:07:58.137 05:05:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:07:58.137 05:05:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:07:58.137 05:05:54 -- nvmf/common.sh@119 -- # set +e 00:07:58.137 05:05:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:58.137 05:05:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:07:58.137 rmmod nvme_rdma 00:07:58.137 rmmod nvme_fabrics 00:07:58.137 05:05:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:58.137 05:05:54 -- nvmf/common.sh@123 -- # set -e 00:07:58.137 05:05:54 -- nvmf/common.sh@124 -- # return 0 00:07:58.137 05:05:54 -- nvmf/common.sh@477 -- # '[' -n 143324 ']' 00:07:58.137 05:05:54 -- nvmf/common.sh@478 -- # killprocess 143324 00:07:58.137 05:05:54 -- common/autotest_common.sh@936 -- # '[' -z 143324 ']' 00:07:58.137 05:05:54 -- common/autotest_common.sh@940 -- # kill -0 143324 00:07:58.137 05:05:54 -- common/autotest_common.sh@941 -- # uname 00:07:58.137 05:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:58.138 05:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 143324 00:07:58.397 05:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:58.397 05:05:54 -- common/autotest_common.sh@946 -- # '[' 
reactor_0 = sudo ']' 00:07:58.397 05:05:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 143324' 00:07:58.397 killing process with pid 143324 00:07:58.397 05:05:54 -- common/autotest_common.sh@955 -- # kill 143324 00:07:58.397 [2024-11-20 05:05:54.991589] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:58.397 05:05:54 -- common/autotest_common.sh@960 -- # wait 143324 00:07:58.397 05:05:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:58.397 05:05:55 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:07:58.397 00:07:58.397 real 0m7.291s 00:07:58.397 user 0m7.972s 00:07:58.397 sys 0m4.379s 00:07:58.397 05:05:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.397 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:07:58.397 ************************************ 00:07:58.397 END TEST nvmf_discovery 00:07:58.397 ************************************ 00:07:58.658 05:05:55 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:58.658 05:05:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:58.658 05:05:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.658 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:07:58.658 ************************************ 00:07:58.658 START TEST nvmf_referrals 00:07:58.658 ************************************ 00:07:58.658 05:05:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:58.658 * Looking for test storage... 
00:07:58.658 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:58.658 05:05:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:58.658 05:05:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:58.658 05:05:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:58.658 05:05:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:58.658 05:05:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:58.658 05:05:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:58.658 05:05:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:58.658 05:05:55 -- scripts/common.sh@335 -- # IFS=.-: 00:07:58.658 05:05:55 -- scripts/common.sh@335 -- # read -ra ver1 00:07:58.658 05:05:55 -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.658 05:05:55 -- scripts/common.sh@336 -- # read -ra ver2 00:07:58.658 05:05:55 -- scripts/common.sh@337 -- # local 'op=<' 00:07:58.658 05:05:55 -- scripts/common.sh@339 -- # ver1_l=2 00:07:58.658 05:05:55 -- scripts/common.sh@340 -- # ver2_l=1 00:07:58.658 05:05:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:58.658 05:05:55 -- scripts/common.sh@343 -- # case "$op" in 00:07:58.658 05:05:55 -- scripts/common.sh@344 -- # : 1 00:07:58.658 05:05:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:58.658 05:05:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.658 05:05:55 -- scripts/common.sh@364 -- # decimal 1 00:07:58.658 05:05:55 -- scripts/common.sh@352 -- # local d=1 00:07:58.658 05:05:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.658 05:05:55 -- scripts/common.sh@354 -- # echo 1 00:07:58.658 05:05:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:58.658 05:05:55 -- scripts/common.sh@365 -- # decimal 2 00:07:58.658 05:05:55 -- scripts/common.sh@352 -- # local d=2 00:07:58.658 05:05:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.658 05:05:55 -- scripts/common.sh@354 -- # echo 2 00:07:58.658 05:05:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:58.658 05:05:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:58.658 05:05:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:58.658 05:05:55 -- scripts/common.sh@367 -- # return 0 00:07:58.658 05:05:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.658 05:05:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:58.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.658 --rc genhtml_branch_coverage=1 00:07:58.658 --rc genhtml_function_coverage=1 00:07:58.658 --rc genhtml_legend=1 00:07:58.658 --rc geninfo_all_blocks=1 00:07:58.658 --rc geninfo_unexecuted_blocks=1 00:07:58.658 00:07:58.658 ' 00:07:58.658 05:05:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:58.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.658 --rc genhtml_branch_coverage=1 00:07:58.658 --rc genhtml_function_coverage=1 00:07:58.658 --rc genhtml_legend=1 00:07:58.658 --rc geninfo_all_blocks=1 00:07:58.658 --rc geninfo_unexecuted_blocks=1 00:07:58.658 00:07:58.658 ' 00:07:58.658 05:05:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:58.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.658 --rc genhtml_branch_coverage=1 00:07:58.658 --rc 
genhtml_function_coverage=1 00:07:58.658 --rc genhtml_legend=1 00:07:58.658 --rc geninfo_all_blocks=1 00:07:58.658 --rc geninfo_unexecuted_blocks=1 00:07:58.658 00:07:58.658 ' 00:07:58.658 05:05:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:58.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.658 --rc genhtml_branch_coverage=1 00:07:58.658 --rc genhtml_function_coverage=1 00:07:58.658 --rc genhtml_legend=1 00:07:58.658 --rc geninfo_all_blocks=1 00:07:58.658 --rc geninfo_unexecuted_blocks=1 00:07:58.658 00:07:58.658 ' 00:07:58.658 05:05:55 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.658 05:05:55 -- nvmf/common.sh@7 -- # uname -s 00:07:58.658 05:05:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.658 05:05:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.658 05:05:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.658 05:05:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.658 05:05:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.658 05:05:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.658 05:05:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.658 05:05:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.658 05:05:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.658 05:05:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.658 05:05:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:58.658 05:05:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:58.658 05:05:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.658 05:05:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.658 05:05:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:58.658 05:05:55 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:58.658 05:05:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.658 05:05:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.658 05:05:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.658 05:05:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.658 05:05:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.659 05:05:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.659 05:05:55 -- paths/export.sh@5 -- # export PATH 00:07:58.659 05:05:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.659 05:05:55 -- nvmf/common.sh@46 -- # : 0 00:07:58.659 05:05:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:58.659 05:05:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:58.659 05:05:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:58.659 05:05:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.659 05:05:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.659 05:05:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:58.659 05:05:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:58.659 05:05:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:58.659 05:05:55 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:58.659 05:05:55 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:58.659 05:05:55 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:58.659 05:05:55 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:58.659 05:05:55 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:58.659 05:05:55 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:58.659 05:05:55 -- target/referrals.sh@37 -- # nvmftestinit 00:07:58.659 05:05:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:58.659 05:05:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.659 05:05:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:58.659 05:05:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:58.659 05:05:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:58.659 05:05:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.659 05:05:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.659 05:05:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.659 05:05:55 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:07:58.659 05:05:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:58.659 05:05:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:58.659 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:08:03.938 05:06:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:03.938 05:06:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:03.938 05:06:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:03.938 05:06:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:03.938 05:06:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:03.938 05:06:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:03.938 05:06:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:03.938 05:06:00 -- nvmf/common.sh@294 -- # net_devs=() 00:08:03.938 05:06:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:03.938 05:06:00 -- nvmf/common.sh@295 -- # e810=() 00:08:03.938 05:06:00 -- nvmf/common.sh@295 
-- # local -ga e810 00:08:03.938 05:06:00 -- nvmf/common.sh@296 -- # x722=() 00:08:03.938 05:06:00 -- nvmf/common.sh@296 -- # local -ga x722 00:08:03.938 05:06:00 -- nvmf/common.sh@297 -- # mlx=() 00:08:03.938 05:06:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:03.939 05:06:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.939 05:06:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:03.939 05:06:00 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:03.939 05:06:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:03.939 05:06:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:03.939 05:06:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:03.939 05:06:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:03.939 05:06:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:08:03.939 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:03.939 05:06:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:03.939 05:06:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:03.939 05:06:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:03.939 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:03.939 05:06:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:03.939 05:06:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:03.939 05:06:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:08:03.939 05:06:00 -- nvmf/common.sh@376 -- # modinfo irdma 00:08:03.939 05:06:00 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:08:03.939 05:06:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:03.939 05:06:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.939 05:06:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:03.939 05:06:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:08:03.939 05:06:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:03.939 Found net devices under 0000:af:00.0: cvl_0_0 00:08:03.939 05:06:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.939 05:06:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:03.939 05:06:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.939 05:06:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:03.939 05:06:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.939 05:06:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:03.939 Found net devices under 0000:af:00.1: cvl_0_1 00:08:03.939 05:06:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.939 05:06:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:03.939 05:06:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:03.939 05:06:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:03.939 05:06:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:03.939 05:06:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:03.939 05:06:00 -- nvmf/common.sh@57 -- # uname 00:08:03.939 05:06:00 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:03.939 05:06:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:03.939 05:06:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:03.939 05:06:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:03.939 05:06:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:03.939 05:06:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:03.939 05:06:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:03.939 05:06:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:03.939 05:06:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:03.939 05:06:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 
00:08:03.939 05:06:00 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:03.939 05:06:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:03.939 05:06:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:03.939 05:06:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:03.939 05:06:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:03.939 05:06:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:03.939 05:06:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:03.939 05:06:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.200 05:06:00 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:04.200 05:06:00 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:08:04.200 05:06:00 -- nvmf/common.sh@104 -- # continue 2 00:08:04.200 05:06:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:04.200 05:06:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.200 05:06:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:08:04.200 05:06:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.200 05:06:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:04.200 05:06:00 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:08:04.200 05:06:00 -- nvmf/common.sh@104 -- # continue 2 00:08:04.200 05:06:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:04.200 05:06:00 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:08:04.200 05:06:00 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:08:04.200 05:06:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:08:04.200 05:06:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:04.200 05:06:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:04.200 05:06:00 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:04.200 05:06:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:04.200 05:06:00 -- nvmf/common.sh@80 
-- # ip addr show cvl_0_0 00:08:04.200 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:04.200 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:08:04.200 altname enp175s0f0np0 00:08:04.200 altname ens801f0np0 00:08:04.200 inet 192.168.100.8/24 scope global cvl_0_0 00:08:04.200 valid_lft forever preferred_lft forever 00:08:04.200 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:08:04.200 valid_lft forever preferred_lft forever 00:08:04.200 05:06:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:04.200 05:06:00 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:08:04.200 05:06:00 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:08:04.200 05:06:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:08:04.200 05:06:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:04.200 05:06:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:04.200 05:06:00 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:04.200 05:06:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:04.200 05:06:00 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:08:04.200 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:04.200 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:08:04.200 altname enp175s0f1np1 00:08:04.200 altname ens801f1np1 00:08:04.200 inet 192.168.100.9/24 scope global cvl_0_1 00:08:04.200 valid_lft forever preferred_lft forever 00:08:04.200 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:08:04.200 valid_lft forever preferred_lft forever 00:08:04.200 05:06:00 -- nvmf/common.sh@410 -- # return 0 00:08:04.200 05:06:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:04.200 05:06:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:04.200 05:06:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:04.200 05:06:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:04.200 05:06:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:04.200 05:06:00 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:04.200 05:06:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:04.200 05:06:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:04.200 05:06:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:04.200 05:06:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:04.200 05:06:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:04.200 05:06:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.200 05:06:00 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:04.200 05:06:00 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:08:04.200 05:06:00 -- nvmf/common.sh@104 -- # continue 2 00:08:04.200 05:06:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:04.200 05:06:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.200 05:06:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:08:04.200 05:06:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.200 05:06:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:04.200 05:06:00 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:08:04.200 05:06:00 -- nvmf/common.sh@104 -- # continue 2 00:08:04.200 05:06:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:04.200 05:06:00 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:08:04.200 05:06:00 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:08:04.201 05:06:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:08:04.201 05:06:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:04.201 05:06:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:04.201 05:06:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:04.201 05:06:00 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:08:04.201 05:06:00 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:08:04.201 05:06:00 -- nvmf/common.sh@112 -- 
# cut -d/ -f1 00:08:04.201 05:06:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:08:04.201 05:06:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:04.201 05:06:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:04.201 192.168.100.9' 00:08:04.201 05:06:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:04.201 192.168.100.9' 00:08:04.201 05:06:00 -- nvmf/common.sh@445 -- # head -n 1 00:08:04.201 05:06:00 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:04.201 05:06:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:04.201 192.168.100.9' 00:08:04.201 05:06:00 -- nvmf/common.sh@446 -- # tail -n +2 00:08:04.201 05:06:00 -- nvmf/common.sh@446 -- # head -n 1 00:08:04.201 05:06:00 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:04.201 05:06:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:04.201 05:06:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:04.201 05:06:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:04.201 05:06:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:04.201 05:06:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:04.201 05:06:00 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:04.201 05:06:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:04.201 05:06:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.201 05:06:00 -- common/autotest_common.sh@10 -- # set +x 00:08:04.201 05:06:00 -- nvmf/common.sh@469 -- # nvmfpid=146726 00:08:04.201 05:06:00 -- nvmf/common.sh@470 -- # waitforlisten 146726 00:08:04.201 05:06:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.201 05:06:00 -- common/autotest_common.sh@829 -- # '[' -z 146726 ']' 00:08:04.201 05:06:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.201 05:06:00 -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:08:04.201 05:06:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.201 05:06:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:04.201 05:06:00 -- common/autotest_common.sh@10 -- # set +x 00:08:04.201 [2024-11-20 05:06:00.936836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:04.201 [2024-11-20 05:06:00.936878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.201 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.201 [2024-11-20 05:06:00.993094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.460 [2024-11-20 05:06:01.070742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.460 [2024-11-20 05:06:01.070857] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.460 [2024-11-20 05:06:01.070866] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.461 [2024-11-20 05:06:01.070876] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:04.461 [2024-11-20 05:06:01.070930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.461 [2024-11-20 05:06:01.071038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.461 [2024-11-20 05:06:01.071149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.461 [2024-11-20 05:06:01.071151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.030 05:06:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.030 05:06:01 -- common/autotest_common.sh@862 -- # return 0 00:08:05.030 05:06:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:05.030 05:06:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.030 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.030 05:06:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.030 05:06:01 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:05.030 05:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.030 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.030 [2024-11-20 05:06:01.822198] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xdbb100/0xdba740) succeed. 00:08:05.030 [2024-11-20 05:06:01.831090] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xdbc470/0xdbacc0) succeed. 00:08:05.030 [2024-11-20 05:06:01.831118] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:08:05.030 05:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.030 05:06:01 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:05.030 05:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.030 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.030 [2024-11-20 05:06:01.843339] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:05.030 05:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.030 05:06:01 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:05.030 05:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.030 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.030 05:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.030 05:06:01 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:05.030 05:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.030 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.290 05:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.290 05:06:01 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:05.290 05:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.290 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.290 05:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.290 05:06:01 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.290 05:06:01 -- target/referrals.sh@48 -- # jq length 00:08:05.290 05:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.290 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.290 05:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.290 05:06:01 -- target/referrals.sh@48 
-- # (( 3 == 3 )) 00:08:05.290 05:06:01 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:05.290 05:06:01 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:05.290 05:06:01 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:05.290 05:06:01 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.290 05:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.290 05:06:01 -- target/referrals.sh@21 -- # sort 00:08:05.290 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.290 05:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.290 05:06:01 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:05.290 05:06:01 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:05.290 05:06:01 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:05.290 05:06:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.290 05:06:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.290 05:06:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:05.290 05:06:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.290 05:06:01 -- target/referrals.sh@26 -- # sort 00:08:05.290 05:06:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:05.290 05:06:02 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:05.290 05:06:02 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:05.290 05:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.290 05:06:02 -- common/autotest_common.sh@10 -- # set +x 
00:08:05.290 05:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.291 05:06:02 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:05.291 05:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.291 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.291 05:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.291 05:06:02 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:05.291 05:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.291 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.291 05:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.291 05:06:02 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.291 05:06:02 -- target/referrals.sh@56 -- # jq length 00:08:05.291 05:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.291 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.550 05:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.550 05:06:02 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:05.550 05:06:02 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:05.550 05:06:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.550 05:06:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.550 05:06:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:05.550 05:06:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.550 05:06:02 -- target/referrals.sh@26 -- # sort 00:08:05.550 05:06:02 -- target/referrals.sh@26 -- # echo 00:08:05.550 05:06:02 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:05.550 05:06:02 -- target/referrals.sh@60 -- 
# rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:05.550 05:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.550 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.550 05:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.550 05:06:02 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:05.550 05:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.550 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.550 05:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.550 05:06:02 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:05.550 05:06:02 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:05.550 05:06:02 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.550 05:06:02 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:05.550 05:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.550 05:06:02 -- target/referrals.sh@21 -- # sort 00:08:05.550 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.550 05:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.550 05:06:02 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:05.551 05:06:02 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.551 05:06:02 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:05.551 05:06:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.551 05:06:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.551 05:06:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:05.551 05:06:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != 
"current discovery subsystem").traddr' 00:08:05.551 05:06:02 -- target/referrals.sh@26 -- # sort 00:08:05.809 05:06:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:05.809 05:06:02 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.809 05:06:02 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:05.809 05:06:02 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:05.809 05:06:02 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:05.809 05:06:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:05.809 05:06:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.809 05:06:02 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:05.809 05:06:02 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.809 05:06:02 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.809 05:06:02 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:05.809 05:06:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:05.809 05:06:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:06.069 05:06:02 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:06.069 05:06:02 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:06.069 05:06:02 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.069 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:08:06.069 05:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.069 05:06:02 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:06.069 05:06:02 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:06.069 05:06:02 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:06.069 05:06:02 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:06.069 05:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.069 05:06:02 -- target/referrals.sh@21 -- # sort 00:08:06.069 05:06:02 -- common/autotest_common.sh@10 -- # set +x 00:08:06.069 05:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.069 05:06:02 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:06.069 05:06:02 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:06.069 05:06:02 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:06.070 05:06:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:06.070 05:06:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:06.070 05:06:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:06.070 05:06:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:06.070 05:06:02 -- target/referrals.sh@26 -- # sort 00:08:06.329 05:06:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:06.329 05:06:02 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:06.329 05:06:02 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:06.329 05:06:02 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:06.329 05:06:02 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:06.329 
05:06:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:06.329 05:06:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:06.329 05:06:03 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:06.329 05:06:03 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:06.329 05:06:03 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:06.329 05:06:03 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:06.329 05:06:03 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:06.329 05:06:03 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:06.589 05:06:03 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:06.589 05:06:03 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:06.589 05:06:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.589 05:06:03 -- common/autotest_common.sh@10 -- # set +x 00:08:06.589 05:06:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.589 05:06:03 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:06.589 05:06:03 -- target/referrals.sh@82 -- # jq length 00:08:06.589 05:06:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.589 05:06:03 -- common/autotest_common.sh@10 -- # set +x 00:08:06.589 05:06:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.589 05:06:03 -- target/referrals.sh@82 -- # (( 0 == 0 
)) 00:08:06.589 05:06:03 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:06.589 05:06:03 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:06.589 05:06:03 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:06.589 05:06:03 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:06.589 05:06:03 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:06.589 05:06:03 -- target/referrals.sh@26 -- # sort 00:08:06.589 05:06:03 -- target/referrals.sh@26 -- # echo 00:08:06.589 05:06:03 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:06.589 05:06:03 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:06.589 05:06:03 -- target/referrals.sh@86 -- # nvmftestfini 00:08:06.589 05:06:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:06.589 05:06:03 -- nvmf/common.sh@116 -- # sync 00:08:06.589 05:06:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:06.589 05:06:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:06.589 05:06:03 -- nvmf/common.sh@119 -- # set +e 00:08:06.589 05:06:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:06.589 05:06:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:06.589 rmmod nvme_rdma 00:08:06.849 rmmod nvme_fabrics 00:08:06.849 05:06:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:06.849 05:06:03 -- nvmf/common.sh@123 -- # set -e 00:08:06.849 05:06:03 -- nvmf/common.sh@124 -- # return 0 00:08:06.849 05:06:03 -- nvmf/common.sh@477 -- # '[' -n 146726 ']' 00:08:06.849 05:06:03 -- nvmf/common.sh@478 -- # killprocess 146726 00:08:06.849 05:06:03 -- common/autotest_common.sh@936 -- # '[' -z 146726 ']' 00:08:06.849 05:06:03 -- common/autotest_common.sh@940 -- # kill -0 146726 00:08:06.849 05:06:03 -- common/autotest_common.sh@941 -- # uname 00:08:06.849 05:06:03 
-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:06.849 05:06:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146726 00:08:06.849 05:06:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:06.849 05:06:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:06.849 05:06:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146726' 00:08:06.849 killing process with pid 146726 00:08:06.849 05:06:03 -- common/autotest_common.sh@955 -- # kill 146726 00:08:06.849 05:06:03 -- common/autotest_common.sh@960 -- # wait 146726 00:08:07.109 05:06:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:07.109 05:06:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:07.109 00:08:07.109 real 0m8.493s 00:08:07.109 user 0m13.241s 00:08:07.109 sys 0m4.759s 00:08:07.109 05:06:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.109 05:06:03 -- common/autotest_common.sh@10 -- # set +x 00:08:07.109 ************************************ 00:08:07.109 END TEST nvmf_referrals 00:08:07.109 ************************************ 00:08:07.109 05:06:03 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:07.109 05:06:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:07.109 05:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.109 05:06:03 -- common/autotest_common.sh@10 -- # set +x 00:08:07.109 ************************************ 00:08:07.109 START TEST nvmf_connect_disconnect 00:08:07.109 ************************************ 00:08:07.109 05:06:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:07.109 * Looking for test storage... 
00:08:07.109 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:08:07.109 05:06:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:07.109 05:06:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:07.109 05:06:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:07.109 05:06:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:07.109 05:06:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:07.109 05:06:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:07.109 05:06:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:07.109 05:06:03 -- scripts/common.sh@335 -- # IFS=.-: 00:08:07.109 05:06:03 -- scripts/common.sh@335 -- # read -ra ver1 00:08:07.109 05:06:03 -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.109 05:06:03 -- scripts/common.sh@336 -- # read -ra ver2 00:08:07.109 05:06:03 -- scripts/common.sh@337 -- # local 'op=<' 00:08:07.109 05:06:03 -- scripts/common.sh@339 -- # ver1_l=2 00:08:07.109 05:06:03 -- scripts/common.sh@340 -- # ver2_l=1 00:08:07.109 05:06:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:07.109 05:06:03 -- scripts/common.sh@343 -- # case "$op" in 00:08:07.109 05:06:03 -- scripts/common.sh@344 -- # : 1 00:08:07.109 05:06:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:07.109 05:06:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.109 05:06:03 -- scripts/common.sh@364 -- # decimal 1 00:08:07.109 05:06:03 -- scripts/common.sh@352 -- # local d=1 00:08:07.109 05:06:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.109 05:06:03 -- scripts/common.sh@354 -- # echo 1 00:08:07.370 05:06:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:07.370 05:06:03 -- scripts/common.sh@365 -- # decimal 2 00:08:07.370 05:06:03 -- scripts/common.sh@352 -- # local d=2 00:08:07.370 05:06:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.370 05:06:03 -- scripts/common.sh@354 -- # echo 2 00:08:07.370 05:06:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:07.370 05:06:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:07.370 05:06:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:07.370 05:06:03 -- scripts/common.sh@367 -- # return 0 00:08:07.370 05:06:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.370 05:06:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:07.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.370 --rc genhtml_branch_coverage=1 00:08:07.370 --rc genhtml_function_coverage=1 00:08:07.370 --rc genhtml_legend=1 00:08:07.370 --rc geninfo_all_blocks=1 00:08:07.370 --rc geninfo_unexecuted_blocks=1 00:08:07.370 00:08:07.370 ' 00:08:07.370 05:06:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:07.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.370 --rc genhtml_branch_coverage=1 00:08:07.370 --rc genhtml_function_coverage=1 00:08:07.370 --rc genhtml_legend=1 00:08:07.370 --rc geninfo_all_blocks=1 00:08:07.370 --rc geninfo_unexecuted_blocks=1 00:08:07.370 00:08:07.370 ' 00:08:07.370 05:06:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:07.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.370 --rc genhtml_branch_coverage=1 00:08:07.370 --rc 
genhtml_function_coverage=1 00:08:07.370 --rc genhtml_legend=1 00:08:07.370 --rc geninfo_all_blocks=1 00:08:07.370 --rc geninfo_unexecuted_blocks=1 00:08:07.370 00:08:07.370 ' 00:08:07.370 05:06:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:07.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.370 --rc genhtml_branch_coverage=1 00:08:07.370 --rc genhtml_function_coverage=1 00:08:07.370 --rc genhtml_legend=1 00:08:07.370 --rc geninfo_all_blocks=1 00:08:07.370 --rc geninfo_unexecuted_blocks=1 00:08:07.370 00:08:07.370 ' 00:08:07.370 05:06:03 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.370 05:06:03 -- nvmf/common.sh@7 -- # uname -s 00:08:07.370 05:06:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.370 05:06:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.370 05:06:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.370 05:06:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.370 05:06:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.370 05:06:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.370 05:06:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.370 05:06:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.370 05:06:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.370 05:06:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.370 05:06:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:07.370 05:06:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:07.370 05:06:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.370 05:06:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.370 05:06:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:07.370 05:06:03 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:08:07.370 05:06:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.370 05:06:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.370 05:06:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.370 05:06:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.370 05:06:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.370 05:06:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.370 05:06:03 -- paths/export.sh@5 -- # export PATH 00:08:07.370 05:06:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.370 05:06:03 -- nvmf/common.sh@46 -- # : 0 00:08:07.370 05:06:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.370 05:06:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.370 05:06:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.370 05:06:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.370 05:06:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.370 05:06:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:07.370 05:06:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.370 05:06:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.370 05:06:03 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.370 05:06:03 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.370 05:06:03 -- 
target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:07.370 05:06:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:07.370 05:06:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.370 05:06:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.370 05:06:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.370 05:06:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.370 05:06:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.370 05:06:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.370 05:06:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.370 05:06:03 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:08:07.370 05:06:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:07.370 05:06:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:07.370 05:06:03 -- common/autotest_common.sh@10 -- # set +x 00:08:12.652 05:06:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:12.652 05:06:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:12.652 05:06:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:12.652 05:06:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:12.652 05:06:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:12.652 05:06:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:12.652 05:06:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:12.652 05:06:09 -- nvmf/common.sh@294 -- # net_devs=() 00:08:12.652 05:06:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:12.652 05:06:09 -- nvmf/common.sh@295 -- # e810=() 00:08:12.652 05:06:09 -- nvmf/common.sh@295 -- # local -ga e810 00:08:12.652 05:06:09 -- nvmf/common.sh@296 -- # x722=() 00:08:12.652 05:06:09 -- nvmf/common.sh@296 -- # local -ga x722 00:08:12.652 05:06:09 -- nvmf/common.sh@297 -- # mlx=() 00:08:12.652 05:06:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:12.652 05:06:09 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.652 05:06:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:12.652 05:06:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:12.653 05:06:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:12.653 05:06:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:12.653 05:06:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:12.653 05:06:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:12.653 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:12.653 05:06:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.653 05:06:09 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:12.653 05:06:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:12.653 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:12.653 05:06:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:12.653 05:06:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:12.653 05:06:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:08:12.653 05:06:09 -- nvmf/common.sh@376 -- # modinfo irdma 00:08:12.653 05:06:09 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:08:12.653 05:06:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.653 05:06:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:12.653 05:06:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.653 05:06:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:12.653 Found net devices under 0000:af:00.0: cvl_0_0 00:08:12.653 05:06:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.653 05:06:09 -- nvmf/common.sh@381 -- # for pci in 
"${pci_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.653 05:06:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:12.653 05:06:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.653 05:06:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:12.653 Found net devices under 0000:af:00.1: cvl_0_1 00:08:12.653 05:06:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.653 05:06:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:12.653 05:06:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:12.653 05:06:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:12.653 05:06:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:12.653 05:06:09 -- nvmf/common.sh@57 -- # uname 00:08:12.653 05:06:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:12.653 05:06:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:12.653 05:06:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:12.653 05:06:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:12.653 05:06:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:12.653 05:06:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:12.653 05:06:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:12.653 05:06:09 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:12.653 05:06:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:12.653 05:06:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:12.653 05:06:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:12.653 05:06:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:12.653 05:06:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:12.653 05:06:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:12.653 
05:06:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:12.653 05:06:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:12.653 05:06:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:08:12.653 05:06:09 -- nvmf/common.sh@104 -- # continue 2 00:08:12.653 05:06:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:08:12.653 05:06:09 -- nvmf/common.sh@104 -- # continue 2 00:08:12.653 05:06:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:12.653 05:06:09 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:08:12.653 05:06:09 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:12.653 05:06:09 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:12.653 05:06:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:08:12.653 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:12.653 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:08:12.653 altname enp175s0f0np0 00:08:12.653 altname ens801f0np0 00:08:12.653 inet 192.168.100.8/24 scope global cvl_0_0 
00:08:12.653 valid_lft forever preferred_lft forever 00:08:12.653 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:08:12.653 valid_lft forever preferred_lft forever 00:08:12.653 05:06:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:12.653 05:06:09 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:08:12.653 05:06:09 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:12.653 05:06:09 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:12.653 05:06:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:08:12.653 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:08:12.653 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:08:12.653 altname enp175s0f1np1 00:08:12.653 altname ens801f1np1 00:08:12.653 inet 192.168.100.9/24 scope global cvl_0_1 00:08:12.653 valid_lft forever preferred_lft forever 00:08:12.653 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:08:12.653 valid_lft forever preferred_lft forever 00:08:12.653 05:06:09 -- nvmf/common.sh@410 -- # return 0 00:08:12.653 05:06:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:12.653 05:06:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:12.653 05:06:09 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:12.653 05:06:09 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:12.653 05:06:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:12.653 05:06:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:12.653 05:06:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:12.653 05:06:09 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:12.653 05:06:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:12.653 05:06:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:08:12.653 05:06:09 -- nvmf/common.sh@104 -- # continue 2 00:08:12.653 05:06:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:12.653 05:06:09 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:08:12.653 05:06:09 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:08:12.653 05:06:09 -- nvmf/common.sh@104 -- # continue 2 00:08:12.653 05:06:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:12.653 05:06:09 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:08:12.653 05:06:09 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:12.653 05:06:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:12.653 05:06:09 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:08:12.653 05:06:09 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:12.653 05:06:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:12.653 05:06:09 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:12.653 
192.168.100.9' 00:08:12.653 05:06:09 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:12.653 192.168.100.9' 00:08:12.653 05:06:09 -- nvmf/common.sh@445 -- # head -n 1 00:08:12.653 05:06:09 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:12.653 05:06:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:12.653 192.168.100.9' 00:08:12.653 05:06:09 -- nvmf/common.sh@446 -- # tail -n +2 00:08:12.653 05:06:09 -- nvmf/common.sh@446 -- # head -n 1 00:08:12.653 05:06:09 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:12.654 05:06:09 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:12.654 05:06:09 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:12.654 05:06:09 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:12.654 05:06:09 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:12.654 05:06:09 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:12.654 05:06:09 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:12.654 05:06:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:12.654 05:06:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.654 05:06:09 -- common/autotest_common.sh@10 -- # set +x 00:08:12.654 05:06:09 -- nvmf/common.sh@469 -- # nvmfpid=150898 00:08:12.654 05:06:09 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.654 05:06:09 -- nvmf/common.sh@470 -- # waitforlisten 150898 00:08:12.654 05:06:09 -- common/autotest_common.sh@829 -- # '[' -z 150898 ']' 00:08:12.654 05:06:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.654 05:06:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.654 05:06:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:12.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.654 05:06:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.654 05:06:09 -- common/autotest_common.sh@10 -- # set +x 00:08:12.654 [2024-11-20 05:06:09.425117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:12.654 [2024-11-20 05:06:09.425158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.654 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.913 [2024-11-20 05:06:09.482142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.913 [2024-11-20 05:06:09.553115] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.913 [2024-11-20 05:06:09.553223] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.913 [2024-11-20 05:06:09.553230] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.913 [2024-11-20 05:06:09.553236] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:12.913 [2024-11-20 05:06:09.553330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.913 [2024-11-20 05:06:09.553428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.913 [2024-11-20 05:06:09.553517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.913 [2024-11-20 05:06:09.553518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.483 05:06:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.483 05:06:10 -- common/autotest_common.sh@862 -- # return 0 00:08:13.483 05:06:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:13.483 05:06:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.483 05:06:10 -- common/autotest_common.sh@10 -- # set +x 00:08:13.483 05:06:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.483 05:06:10 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:13.483 05:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.483 05:06:10 -- common/autotest_common.sh@10 -- # set +x 00:08:13.483 [2024-11-20 05:06:10.294509] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:13.483 [2024-11-20 05:06:10.307328] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1295100/0x1294740) succeed. 00:08:13.742 [2024-11-20 05:06:10.316248] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1296470/0x1294cc0) succeed. 00:08:13.742 [2024-11-20 05:06:10.316272] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:08:13.742 05:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.742 05:06:10 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:13.742 05:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.742 05:06:10 -- common/autotest_common.sh@10 -- # set +x 00:08:13.742 05:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.742 05:06:10 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:13.742 05:06:10 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:13.742 05:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.742 05:06:10 -- common/autotest_common.sh@10 -- # set +x 00:08:13.742 05:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.742 05:06:10 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:13.742 05:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.742 05:06:10 -- common/autotest_common.sh@10 -- # set +x 00:08:13.742 05:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.742 05:06:10 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:13.742 05:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.742 05:06:10 -- common/autotest_common.sh@10 -- # set +x 00:08:13.742 [2024-11-20 05:06:10.375433] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:13.742 05:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.742 05:06:10 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:13.742 05:06:10 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:13.742 05:06:10 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:13.743 05:06:10 -- 
target/connect_disconnect.sh@34 -- # set +x 00:08:16.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 
1 controller(s) 00:09:30.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.296 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.173 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.875 05:10:42 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:45.875 05:10:42 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:45.875 05:10:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:45.875 05:10:42 -- nvmf/common.sh@116 -- # sync 00:12:45.875 05:10:42 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:12:45.875 05:10:42 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:12:45.875 05:10:42 -- nvmf/common.sh@119 -- # set +e 00:12:45.875 05:10:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:45.875 05:10:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 
00:12:45.875 rmmod nvme_rdma 00:12:45.875 rmmod nvme_fabrics 00:12:45.875 05:10:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:45.875 05:10:42 -- nvmf/common.sh@123 -- # set -e 00:12:45.875 05:10:42 -- nvmf/common.sh@124 -- # return 0 00:12:45.875 05:10:42 -- nvmf/common.sh@477 -- # '[' -n 150898 ']' 00:12:45.875 05:10:42 -- nvmf/common.sh@478 -- # killprocess 150898 00:12:45.875 05:10:42 -- common/autotest_common.sh@936 -- # '[' -z 150898 ']' 00:12:45.875 05:10:42 -- common/autotest_common.sh@940 -- # kill -0 150898 00:12:45.875 05:10:42 -- common/autotest_common.sh@941 -- # uname 00:12:45.875 05:10:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:45.875 05:10:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150898 00:12:45.875 05:10:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:45.875 05:10:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:45.875 05:10:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150898' 00:12:45.875 killing process with pid 150898 00:12:45.875 05:10:42 -- common/autotest_common.sh@955 -- # kill 150898 00:12:45.875 05:10:42 -- common/autotest_common.sh@960 -- # wait 150898 00:12:45.875 05:10:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:45.875 05:10:42 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:12:45.875 00:12:45.875 real 4m38.714s 00:12:45.875 user 18m10.677s 00:12:45.875 sys 0m16.740s 00:12:45.875 05:10:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.875 05:10:42 -- common/autotest_common.sh@10 -- # set +x 00:12:45.875 ************************************ 00:12:45.875 END TEST nvmf_connect_disconnect 00:12:45.875 ************************************ 00:12:45.875 05:10:42 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:45.875 05:10:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 
1 ']' 00:12:45.875 05:10:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.875 05:10:42 -- common/autotest_common.sh@10 -- # set +x 00:12:45.875 ************************************ 00:12:45.875 START TEST nvmf_multitarget 00:12:45.875 ************************************ 00:12:45.875 05:10:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:45.875 * Looking for test storage... 00:12:45.875 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:45.875 05:10:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:45.875 05:10:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:45.875 05:10:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:45.875 05:10:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:45.875 05:10:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:45.875 05:10:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:45.875 05:10:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:45.875 05:10:42 -- scripts/common.sh@335 -- # IFS=.-: 00:12:45.875 05:10:42 -- scripts/common.sh@335 -- # read -ra ver1 00:12:45.875 05:10:42 -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.875 05:10:42 -- scripts/common.sh@336 -- # read -ra ver2 00:12:45.875 05:10:42 -- scripts/common.sh@337 -- # local 'op=<' 00:12:45.875 05:10:42 -- scripts/common.sh@339 -- # ver1_l=2 00:12:45.875 05:10:42 -- scripts/common.sh@340 -- # ver2_l=1 00:12:45.875 05:10:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:45.875 05:10:42 -- scripts/common.sh@343 -- # case "$op" in 00:12:45.875 05:10:42 -- scripts/common.sh@344 -- # : 1 00:12:45.875 05:10:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:45.875 05:10:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.875 05:10:42 -- scripts/common.sh@364 -- # decimal 1 00:12:46.136 05:10:42 -- scripts/common.sh@352 -- # local d=1 00:12:46.136 05:10:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.136 05:10:42 -- scripts/common.sh@354 -- # echo 1 00:12:46.136 05:10:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:46.136 05:10:42 -- scripts/common.sh@365 -- # decimal 2 00:12:46.136 05:10:42 -- scripts/common.sh@352 -- # local d=2 00:12:46.136 05:10:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.136 05:10:42 -- scripts/common.sh@354 -- # echo 2 00:12:46.136 05:10:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:46.136 05:10:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:46.136 05:10:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:46.136 05:10:42 -- scripts/common.sh@367 -- # return 0 00:12:46.136 05:10:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.136 05:10:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:46.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.136 --rc genhtml_branch_coverage=1 00:12:46.136 --rc genhtml_function_coverage=1 00:12:46.136 --rc genhtml_legend=1 00:12:46.136 --rc geninfo_all_blocks=1 00:12:46.136 --rc geninfo_unexecuted_blocks=1 00:12:46.136 00:12:46.136 ' 00:12:46.136 05:10:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:46.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.136 --rc genhtml_branch_coverage=1 00:12:46.136 --rc genhtml_function_coverage=1 00:12:46.136 --rc genhtml_legend=1 00:12:46.136 --rc geninfo_all_blocks=1 00:12:46.136 --rc geninfo_unexecuted_blocks=1 00:12:46.136 00:12:46.136 ' 00:12:46.136 05:10:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:46.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.136 --rc genhtml_branch_coverage=1 00:12:46.136 --rc 
genhtml_function_coverage=1 00:12:46.136 --rc genhtml_legend=1 00:12:46.136 --rc geninfo_all_blocks=1 00:12:46.136 --rc geninfo_unexecuted_blocks=1 00:12:46.136 00:12:46.136 ' 00:12:46.136 05:10:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:46.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.136 --rc genhtml_branch_coverage=1 00:12:46.136 --rc genhtml_function_coverage=1 00:12:46.136 --rc genhtml_legend=1 00:12:46.136 --rc geninfo_all_blocks=1 00:12:46.136 --rc geninfo_unexecuted_blocks=1 00:12:46.136 00:12:46.136 ' 00:12:46.136 05:10:42 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.136 05:10:42 -- nvmf/common.sh@7 -- # uname -s 00:12:46.136 05:10:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.136 05:10:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.136 05:10:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.136 05:10:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.136 05:10:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.136 05:10:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.136 05:10:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.136 05:10:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.136 05:10:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.136 05:10:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.136 05:10:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:46.136 05:10:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:46.136 05:10:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.136 05:10:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.136 05:10:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:46.136 05:10:42 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:46.136 05:10:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.136 05:10:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.136 05:10:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.136 05:10:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.136 05:10:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.136 05:10:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.136 05:10:42 -- paths/export.sh@5 -- # export PATH 00:12:46.136 05:10:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.136 05:10:42 -- nvmf/common.sh@46 -- # : 0 00:12:46.136 05:10:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:46.136 05:10:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:46.136 05:10:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:46.136 05:10:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.136 05:10:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.136 05:10:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:46.136 05:10:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:46.136 05:10:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:46.136 05:10:42 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.136 05:10:42 -- 
target/multitarget.sh@15 -- # nvmftestinit 00:12:46.136 05:10:42 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:12:46.136 05:10:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.136 05:10:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:46.136 05:10:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:46.136 05:10:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:46.136 05:10:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.136 05:10:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.136 05:10:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.136 05:10:42 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:12:46.136 05:10:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:46.136 05:10:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:46.136 05:10:42 -- common/autotest_common.sh@10 -- # set +x 00:12:51.418 05:10:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:51.418 05:10:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:51.418 05:10:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:51.418 05:10:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:51.418 05:10:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:51.418 05:10:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:51.418 05:10:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:51.418 05:10:47 -- nvmf/common.sh@294 -- # net_devs=() 00:12:51.418 05:10:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:51.418 05:10:47 -- nvmf/common.sh@295 -- # e810=() 00:12:51.418 05:10:47 -- nvmf/common.sh@295 -- # local -ga e810 00:12:51.418 05:10:47 -- nvmf/common.sh@296 -- # x722=() 00:12:51.418 05:10:47 -- nvmf/common.sh@296 -- # local -ga x722 00:12:51.418 05:10:47 -- nvmf/common.sh@297 -- # mlx=() 00:12:51.418 05:10:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:51.418 05:10:47 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.418 05:10:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:51.418 05:10:47 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:12:51.418 05:10:47 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:12:51.418 05:10:47 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:12:51.418 05:10:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:51.418 05:10:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:51.418 05:10:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:51.418 05:10:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:51.418 05:10:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:51.418 05:10:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:51.418 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:51.418 05:10:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.419 05:10:47 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.419 05:10:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:51.419 05:10:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:51.419 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:51.419 05:10:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.419 05:10:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:51.419 05:10:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:12:51.419 05:10:47 -- nvmf/common.sh@376 -- # modinfo irdma 00:12:51.419 05:10:47 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:12:51.419 05:10:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:51.419 05:10:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.419 05:10:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:51.419 05:10:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.419 05:10:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:51.419 Found net devices under 0000:af:00.0: cvl_0_0 00:12:51.419 05:10:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.419 05:10:47 -- nvmf/common.sh@381 -- # for pci in 
"${pci_devs[@]}" 00:12:51.419 05:10:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.419 05:10:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:51.419 05:10:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.419 05:10:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:51.419 Found net devices under 0000:af:00.1: cvl_0_1 00:12:51.419 05:10:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.419 05:10:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:51.419 05:10:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:51.419 05:10:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:12:51.419 05:10:47 -- nvmf/common.sh@408 -- # rdma_device_init 00:12:51.419 05:10:47 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:12:51.419 05:10:47 -- nvmf/common.sh@57 -- # uname 00:12:51.419 05:10:47 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:12:51.419 05:10:47 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:12:51.419 05:10:47 -- nvmf/common.sh@62 -- # modprobe ib_core 00:12:51.419 05:10:47 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:12:51.419 05:10:47 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:12:51.419 05:10:47 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:12:51.419 05:10:47 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:12:51.419 05:10:47 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:12:51.419 05:10:47 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:12:51.419 05:10:47 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:51.419 05:10:47 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:12:51.419 05:10:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.419 05:10:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:12:51.419 05:10:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:12:51.419 
05:10:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.419 05:10:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:12:51.419 05:10:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:51.419 05:10:48 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:12:51.419 05:10:48 -- nvmf/common.sh@104 -- # continue 2 00:12:51.419 05:10:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:51.419 05:10:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:51.419 05:10:48 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:12:51.419 05:10:48 -- nvmf/common.sh@104 -- # continue 2 00:12:51.419 05:10:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:12:51.419 05:10:48 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:12:51.419 05:10:48 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:12:51.419 05:10:48 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:12:51.419 05:10:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:12:51.419 05:10:48 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:12:51.419 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:51.419 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:12:51.419 altname enp175s0f0np0 00:12:51.419 altname ens801f0np0 00:12:51.419 inet 192.168.100.8/24 scope global cvl_0_0 
00:12:51.419 valid_lft forever preferred_lft forever 00:12:51.419 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:12:51.419 valid_lft forever preferred_lft forever 00:12:51.419 05:10:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:12:51.419 05:10:48 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:12:51.419 05:10:48 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:12:51.419 05:10:48 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:12:51.419 05:10:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:12:51.419 05:10:48 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:12:51.419 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:51.419 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:12:51.419 altname enp175s0f1np1 00:12:51.419 altname ens801f1np1 00:12:51.419 inet 192.168.100.9/24 scope global cvl_0_1 00:12:51.419 valid_lft forever preferred_lft forever 00:12:51.419 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:12:51.419 valid_lft forever preferred_lft forever 00:12:51.419 05:10:48 -- nvmf/common.sh@410 -- # return 0 00:12:51.419 05:10:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:51.419 05:10:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:51.419 05:10:48 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:12:51.419 05:10:48 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:12:51.419 05:10:48 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:12:51.419 05:10:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.419 05:10:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:12:51.419 05:10:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:12:51.419 05:10:48 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.419 05:10:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:12:51.419 05:10:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:51.419 05:10:48 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:12:51.419 05:10:48 -- nvmf/common.sh@104 -- # continue 2 00:12:51.419 05:10:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:51.419 05:10:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.419 05:10:48 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:51.419 05:10:48 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:12:51.419 05:10:48 -- nvmf/common.sh@104 -- # continue 2 00:12:51.419 05:10:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:12:51.419 05:10:48 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:12:51.419 05:10:48 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:12:51.419 05:10:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:12:51.419 05:10:48 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:12:51.419 05:10:48 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:12:51.419 05:10:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:12:51.419 05:10:48 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:12:51.419 
192.168.100.9' 00:12:51.419 05:10:48 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:12:51.419 192.168.100.9' 00:12:51.419 05:10:48 -- nvmf/common.sh@445 -- # head -n 1 00:12:51.419 05:10:48 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:51.419 05:10:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:51.419 192.168.100.9' 00:12:51.419 05:10:48 -- nvmf/common.sh@446 -- # tail -n +2 00:12:51.419 05:10:48 -- nvmf/common.sh@446 -- # head -n 1 00:12:51.419 05:10:48 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:51.419 05:10:48 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:12:51.419 05:10:48 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:51.419 05:10:48 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:12:51.419 05:10:48 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:12:51.419 05:10:48 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:12:51.419 05:10:48 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:51.419 05:10:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:51.420 05:10:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.420 05:10:48 -- common/autotest_common.sh@10 -- # set +x 00:12:51.420 05:10:48 -- nvmf/common.sh@469 -- # nvmfpid=201358 00:12:51.420 05:10:48 -- nvmf/common.sh@470 -- # waitforlisten 201358 00:12:51.420 05:10:48 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.420 05:10:48 -- common/autotest_common.sh@829 -- # '[' -z 201358 ']' 00:12:51.420 05:10:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.420 05:10:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.420 05:10:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:51.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.420 05:10:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.420 05:10:48 -- common/autotest_common.sh@10 -- # set +x 00:12:51.420 [2024-11-20 05:10:48.197535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:51.420 [2024-11-20 05:10:48.197576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.420 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.679 [2024-11-20 05:10:48.252176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.679 [2024-11-20 05:10:48.330186] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:51.679 [2024-11-20 05:10:48.330292] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.679 [2024-11-20 05:10:48.330300] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.679 [2024-11-20 05:10:48.330306] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
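The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is SPDK's `waitforlisten` helper polling until `nvmf_tgt` brings up its RPC socket. A hedged, simplified sketch of that idea (the function name, retry count, and socket-file check below are assumptions; the real helper in `autotest_common.sh` also verifies the pid is alive and talks to the socket):

```shell
# Hypothetical re-implementation of the waitforlisten idea: poll for the
# app's RPC socket to appear, giving up after a bounded number of retries.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket file showed up: app is listening
        sleep 0.1
    done
    return 1                         # timed out waiting for the app
}

# Usage with the path from this log would be:
#   wait_for_sock /var/tmp/spdk.sock 100
```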
00:12:51.679 [2024-11-20 05:10:48.330350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.679 [2024-11-20 05:10:48.330444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.679 [2024-11-20 05:10:48.330511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.679 [2024-11-20 05:10:48.330512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.249 05:10:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.249 05:10:49 -- common/autotest_common.sh@862 -- # return 0 00:12:52.249 05:10:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:52.249 05:10:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.249 05:10:49 -- common/autotest_common.sh@10 -- # set +x 00:12:52.249 05:10:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.249 05:10:49 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:52.249 05:10:49 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.249 05:10:49 -- target/multitarget.sh@21 -- # jq length 00:12:52.508 05:10:49 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:52.508 05:10:49 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:52.508 "nvmf_tgt_1" 00:12:52.508 05:10:49 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:52.767 "nvmf_tgt_2" 00:12:52.767 05:10:49 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.767 05:10:49 -- target/multitarget.sh@28 -- # jq length 00:12:52.767 
05:10:49 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:52.767 05:10:49 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:52.767 true 00:12:52.767 05:10:49 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:53.026 true 00:12:53.026 05:10:49 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.026 05:10:49 -- target/multitarget.sh@35 -- # jq length 00:12:53.026 05:10:49 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:53.026 05:10:49 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:53.026 05:10:49 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:53.026 05:10:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:53.026 05:10:49 -- nvmf/common.sh@116 -- # sync 00:12:53.026 05:10:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:12:53.026 05:10:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:12:53.026 05:10:49 -- nvmf/common.sh@119 -- # set +e 00:12:53.026 05:10:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:53.026 05:10:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:12:53.026 rmmod nvme_rdma 00:12:53.026 rmmod nvme_fabrics 00:12:53.026 05:10:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:53.026 05:10:49 -- nvmf/common.sh@123 -- # set -e 00:12:53.026 05:10:49 -- nvmf/common.sh@124 -- # return 0 00:12:53.026 05:10:49 -- nvmf/common.sh@477 -- # '[' -n 201358 ']' 00:12:53.026 05:10:49 -- nvmf/common.sh@478 -- # killprocess 201358 00:12:53.026 05:10:49 -- common/autotest_common.sh@936 -- # '[' -z 201358 ']' 00:12:53.026 05:10:49 -- common/autotest_common.sh@940 -- # kill -0 201358 00:12:53.026 05:10:49 -- common/autotest_common.sh@941 -- # uname 00:12:53.026 05:10:49 -- common/autotest_common.sh@941 
-- # '[' Linux = Linux ']' 00:12:53.026 05:10:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 201358 00:12:53.285 05:10:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:53.285 05:10:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:53.285 05:10:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 201358' 00:12:53.285 killing process with pid 201358 00:12:53.285 05:10:49 -- common/autotest_common.sh@955 -- # kill 201358 00:12:53.285 05:10:49 -- common/autotest_common.sh@960 -- # wait 201358 00:12:53.285 05:10:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:53.285 05:10:50 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:12:53.285 00:12:53.285 real 0m7.558s 00:12:53.285 user 0m9.512s 00:12:53.285 sys 0m4.452s 00:12:53.285 05:10:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:53.285 05:10:50 -- common/autotest_common.sh@10 -- # set +x 00:12:53.285 ************************************ 00:12:53.285 END TEST nvmf_multitarget 00:12:53.285 ************************************ 00:12:53.545 05:10:50 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:53.545 05:10:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:53.545 05:10:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.545 05:10:50 -- common/autotest_common.sh@10 -- # set +x 00:12:53.545 ************************************ 00:12:53.545 START TEST nvmf_rpc 00:12:53.545 ************************************ 00:12:53.545 05:10:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:53.545 * Looking for test storage... 
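The `scripts/common.sh` xtrace repeated throughout this log (`cmp_versions`, `decimal`, the `(( ver1[v] < ver2[v] ))` checks) is the version gate that decides whether the installed `lcov` understands the branch/function-coverage flags, e.g. whether `1.15 < 2`. The visible logic — split both versions on `.-:` and compare numeric fields left to right — can be sketched as a standalone function (an independent re-implementation for illustration, not SPDK's code verbatim):

```shell
# Dotted-version "less than" check mirroring the trace in this log:
# split on ".-:", then compare numeric fields, padding the shorter
# version with zeros. Returns 0 (true) iff $1 < $2.
ver_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

With this, `ver_lt 1.15 2` succeeds, which is the comparison the log shows gating the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options.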
00:12:53.545 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:53.545 05:10:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:53.545 05:10:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:53.545 05:10:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:53.545 05:10:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:53.545 05:10:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:53.545 05:10:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:53.545 05:10:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:53.545 05:10:50 -- scripts/common.sh@335 -- # IFS=.-: 00:12:53.545 05:10:50 -- scripts/common.sh@335 -- # read -ra ver1 00:12:53.546 05:10:50 -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.546 05:10:50 -- scripts/common.sh@336 -- # read -ra ver2 00:12:53.546 05:10:50 -- scripts/common.sh@337 -- # local 'op=<' 00:12:53.546 05:10:50 -- scripts/common.sh@339 -- # ver1_l=2 00:12:53.546 05:10:50 -- scripts/common.sh@340 -- # ver2_l=1 00:12:53.546 05:10:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:53.546 05:10:50 -- scripts/common.sh@343 -- # case "$op" in 00:12:53.546 05:10:50 -- scripts/common.sh@344 -- # : 1 00:12:53.546 05:10:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:53.546 05:10:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:53.546 05:10:50 -- scripts/common.sh@364 -- # decimal 1 00:12:53.546 05:10:50 -- scripts/common.sh@352 -- # local d=1 00:12:53.546 05:10:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.546 05:10:50 -- scripts/common.sh@354 -- # echo 1 00:12:53.546 05:10:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:53.546 05:10:50 -- scripts/common.sh@365 -- # decimal 2 00:12:53.546 05:10:50 -- scripts/common.sh@352 -- # local d=2 00:12:53.546 05:10:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.546 05:10:50 -- scripts/common.sh@354 -- # echo 2 00:12:53.546 05:10:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:53.546 05:10:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:53.546 05:10:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:53.546 05:10:50 -- scripts/common.sh@367 -- # return 0 00:12:53.546 05:10:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.546 05:10:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:53.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.546 --rc genhtml_branch_coverage=1 00:12:53.546 --rc genhtml_function_coverage=1 00:12:53.546 --rc genhtml_legend=1 00:12:53.546 --rc geninfo_all_blocks=1 00:12:53.546 --rc geninfo_unexecuted_blocks=1 00:12:53.546 00:12:53.546 ' 00:12:53.546 05:10:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:53.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.546 --rc genhtml_branch_coverage=1 00:12:53.546 --rc genhtml_function_coverage=1 00:12:53.546 --rc genhtml_legend=1 00:12:53.546 --rc geninfo_all_blocks=1 00:12:53.546 --rc geninfo_unexecuted_blocks=1 00:12:53.546 00:12:53.546 ' 00:12:53.546 05:10:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:53.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.546 --rc genhtml_branch_coverage=1 00:12:53.546 --rc 
genhtml_function_coverage=1 00:12:53.546 --rc genhtml_legend=1 00:12:53.546 --rc geninfo_all_blocks=1 00:12:53.546 --rc geninfo_unexecuted_blocks=1 00:12:53.546 00:12:53.546 ' 00:12:53.546 05:10:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:53.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.546 --rc genhtml_branch_coverage=1 00:12:53.546 --rc genhtml_function_coverage=1 00:12:53.546 --rc genhtml_legend=1 00:12:53.546 --rc geninfo_all_blocks=1 00:12:53.546 --rc geninfo_unexecuted_blocks=1 00:12:53.546 00:12:53.546 ' 00:12:53.546 05:10:50 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.546 05:10:50 -- nvmf/common.sh@7 -- # uname -s 00:12:53.546 05:10:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.546 05:10:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.546 05:10:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.546 05:10:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.546 05:10:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.546 05:10:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.546 05:10:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.546 05:10:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.546 05:10:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.546 05:10:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.546 05:10:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:53.546 05:10:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:53.546 05:10:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.546 05:10:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.546 05:10:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:53.546 05:10:50 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:53.546 05:10:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.546 05:10:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.546 05:10:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.546 05:10:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.546 05:10:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.546 05:10:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.546 05:10:50 -- paths/export.sh@5 -- # export PATH 00:12:53.546 05:10:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.546 05:10:50 -- nvmf/common.sh@46 -- # : 0 00:12:53.546 05:10:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:53.546 05:10:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:53.546 05:10:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:53.546 05:10:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.546 05:10:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.546 05:10:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:53.546 05:10:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:53.546 05:10:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:53.546 05:10:50 -- target/rpc.sh@11 -- # loops=5 00:12:53.546 05:10:50 -- target/rpc.sh@23 -- # nvmftestinit 00:12:53.546 05:10:50 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:12:53.546 
05:10:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.546 05:10:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:53.546 05:10:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:53.546 05:10:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:53.546 05:10:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.547 05:10:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.547 05:10:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.547 05:10:50 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:12:53.547 05:10:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:53.547 05:10:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:53.547 05:10:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.827 05:10:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:58.827 05:10:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:58.827 05:10:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:58.827 05:10:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:58.827 05:10:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:58.827 05:10:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:58.827 05:10:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:58.827 05:10:55 -- nvmf/common.sh@294 -- # net_devs=() 00:12:58.827 05:10:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:58.827 05:10:55 -- nvmf/common.sh@295 -- # e810=() 00:12:58.827 05:10:55 -- nvmf/common.sh@295 -- # local -ga e810 00:12:58.827 05:10:55 -- nvmf/common.sh@296 -- # x722=() 00:12:58.827 05:10:55 -- nvmf/common.sh@296 -- # local -ga x722 00:12:58.827 05:10:55 -- nvmf/common.sh@297 -- # mlx=() 00:12:58.827 05:10:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:58.827 05:10:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.827 05:10:55 -- 
nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.827 05:10:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:58.827 05:10:55 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:12:58.827 05:10:55 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:12:58.827 05:10:55 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:12:58.827 05:10:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:58.827 05:10:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:58.827 05:10:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:58.827 05:10:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:58.827 05:10:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:58.828 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:58.828 05:10:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@361 
-- # NVME_CONNECT='nvme connect -i 15' 00:12:58.828 05:10:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:58.828 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:58.828 05:10:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:12:58.828 05:10:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:58.828 05:10:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:12:58.828 05:10:55 -- nvmf/common.sh@376 -- # modinfo irdma 00:12:58.828 05:10:55 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:12:58.828 05:10:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.828 05:10:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:58.828 05:10:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.828 05:10:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:58.828 Found net devices under 0000:af:00.0: cvl_0_0 00:12:58.828 05:10:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.828 05:10:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.828 05:10:55 -- nvmf/common.sh@383 -- # (( 1 == 
0 )) 00:12:58.828 05:10:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.828 05:10:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:58.828 Found net devices under 0000:af:00.1: cvl_0_1 00:12:58.828 05:10:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.828 05:10:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:58.828 05:10:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:58.828 05:10:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@408 -- # rdma_device_init 00:12:58.828 05:10:55 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:12:58.828 05:10:55 -- nvmf/common.sh@57 -- # uname 00:12:58.828 05:10:55 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:12:58.828 05:10:55 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:12:58.828 05:10:55 -- nvmf/common.sh@62 -- # modprobe ib_core 00:12:58.828 05:10:55 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:12:58.828 05:10:55 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:12:58.828 05:10:55 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:12:58.828 05:10:55 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:12:58.828 05:10:55 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:12:58.828 05:10:55 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:12:58.828 05:10:55 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:58.828 05:10:55 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:12:58.828 05:10:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:58.828 05:10:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:12:58.828 05:10:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:12:58.828 05:10:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:58.828 05:10:55 -- nvmf/common.sh@95 -- # (( 2 == 0 
)) 00:12:58.828 05:10:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:12:58.828 05:10:55 -- nvmf/common.sh@104 -- # continue 2 00:12:58.828 05:10:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:12:58.828 05:10:55 -- nvmf/common.sh@104 -- # continue 2 00:12:58.828 05:10:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:12:58.828 05:10:55 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:12:58.828 05:10:55 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:12:58.828 05:10:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:12:58.828 05:10:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:12:58.828 05:10:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:12:58.828 05:10:55 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:12:58.828 05:10:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:12:58.828 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:58.828 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:12:58.828 altname enp175s0f0np0 00:12:58.828 altname ens801f0np0 00:12:58.828 inet 192.168.100.8/24 scope global cvl_0_0 00:12:58.828 valid_lft forever preferred_lft forever 00:12:58.828 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:12:58.828 valid_lft forever preferred_lft forever 
00:12:58.828 05:10:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:12:58.828 05:10:55 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:12:58.828 05:10:55 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:12:58.828 05:10:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:12:58.828 05:10:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:12:58.828 05:10:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:12:58.828 05:10:55 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:12:58.828 05:10:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:12:58.828 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:58.828 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:12:58.828 altname enp175s0f1np1 00:12:58.828 altname ens801f1np1 00:12:58.828 inet 192.168.100.9/24 scope global cvl_0_1 00:12:58.828 valid_lft forever preferred_lft forever 00:12:58.828 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:12:58.828 valid_lft forever preferred_lft forever 00:12:58.828 05:10:55 -- nvmf/common.sh@410 -- # return 0 00:12:58.828 05:10:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:58.828 05:10:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:58.828 05:10:55 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:12:58.828 05:10:55 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:12:58.828 05:10:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:58.828 05:10:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:12:58.828 05:10:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:12:58.828 05:10:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:58.828 05:10:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:12:58.828 05:10:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:12:58.828 
05:10:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:12:58.828 05:10:55 -- nvmf/common.sh@104 -- # continue 2 00:12:58.828 05:10:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.828 05:10:55 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:58.828 05:10:55 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:12:58.828 05:10:55 -- nvmf/common.sh@104 -- # continue 2 00:12:58.829 05:10:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:12:58.829 05:10:55 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:12:58.829 05:10:55 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:12:58.829 05:10:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:12:58.829 05:10:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:12:58.829 05:10:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:12:58.829 05:10:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:12:58.829 05:10:55 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:12:58.829 05:10:55 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:12:58.829 05:10:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:12:58.829 05:10:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:12:58.829 05:10:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:12:58.829 05:10:55 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:12:58.829 192.168.100.9' 00:12:58.829 05:10:55 -- nvmf/common.sh@445 -- # head -n 1 00:12:58.829 05:10:55 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:12:58.829 192.168.100.9' 00:12:58.829 05:10:55 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:58.829 05:10:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:58.829 192.168.100.9' 00:12:58.829 05:10:55 -- nvmf/common.sh@446 -- # tail -n +2 00:12:58.829 05:10:55 -- nvmf/common.sh@446 -- # head -n 1 00:12:58.829 05:10:55 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:58.829 05:10:55 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:12:58.829 05:10:55 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:58.829 05:10:55 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:12:58.829 05:10:55 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:12:58.829 05:10:55 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:12:58.829 05:10:55 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:58.829 05:10:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:58.829 05:10:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.829 05:10:55 -- common/autotest_common.sh@10 -- # set +x 00:12:58.829 05:10:55 -- nvmf/common.sh@469 -- # nvmfpid=204767 00:12:58.829 05:10:55 -- nvmf/common.sh@470 -- # waitforlisten 204767 00:12:58.829 05:10:55 -- common/autotest_common.sh@829 -- # '[' -z 204767 ']' 00:12:58.829 05:10:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.829 05:10:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.829 05:10:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:58.829 05:10:55 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.829 05:10:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.829 05:10:55 -- common/autotest_common.sh@10 -- # set +x 00:12:59.089 [2024-11-20 05:10:55.660695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:59.089 [2024-11-20 05:10:55.660749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.089 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.089 [2024-11-20 05:10:55.718317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.089 [2024-11-20 05:10:55.796781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:59.089 [2024-11-20 05:10:55.796885] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.089 [2024-11-20 05:10:55.796893] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.089 [2024-11-20 05:10:55.796899] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:59.089 [2024-11-20 05:10:55.796938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.089 [2024-11-20 05:10:55.797038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.089 [2024-11-20 05:10:55.797060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.089 [2024-11-20 05:10:55.797062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.030 05:10:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.030 05:10:56 -- common/autotest_common.sh@862 -- # return 0 00:13:00.030 05:10:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:00.030 05:10:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.030 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.030 05:10:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.030 05:10:56 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:00.030 05:10:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.030 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.030 05:10:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.030 05:10:56 -- target/rpc.sh@26 -- # stats='{ 00:13:00.030 "tick_rate": 2100000000, 00:13:00.030 "poll_groups": [ 00:13:00.030 { 00:13:00.030 "name": "nvmf_tgt_poll_group_0", 00:13:00.030 "admin_qpairs": 0, 00:13:00.030 "io_qpairs": 0, 00:13:00.030 "current_admin_qpairs": 0, 00:13:00.030 "current_io_qpairs": 0, 00:13:00.030 "pending_bdev_io": 0, 00:13:00.030 "completed_nvme_io": 0, 00:13:00.030 "transports": [] 00:13:00.030 }, 00:13:00.030 { 00:13:00.030 "name": "nvmf_tgt_poll_group_1", 00:13:00.030 "admin_qpairs": 0, 00:13:00.030 "io_qpairs": 0, 00:13:00.030 "current_admin_qpairs": 0, 00:13:00.030 "current_io_qpairs": 0, 00:13:00.030 "pending_bdev_io": 0, 00:13:00.030 "completed_nvme_io": 0, 00:13:00.030 "transports": [] 00:13:00.030 }, 00:13:00.030 { 00:13:00.030 "name": 
"nvmf_tgt_poll_group_2", 00:13:00.030 "admin_qpairs": 0, 00:13:00.030 "io_qpairs": 0, 00:13:00.030 "current_admin_qpairs": 0, 00:13:00.030 "current_io_qpairs": 0, 00:13:00.030 "pending_bdev_io": 0, 00:13:00.030 "completed_nvme_io": 0, 00:13:00.030 "transports": [] 00:13:00.030 }, 00:13:00.030 { 00:13:00.030 "name": "nvmf_tgt_poll_group_3", 00:13:00.030 "admin_qpairs": 0, 00:13:00.030 "io_qpairs": 0, 00:13:00.030 "current_admin_qpairs": 0, 00:13:00.030 "current_io_qpairs": 0, 00:13:00.030 "pending_bdev_io": 0, 00:13:00.030 "completed_nvme_io": 0, 00:13:00.030 "transports": [] 00:13:00.030 } 00:13:00.030 ] 00:13:00.030 }' 00:13:00.030 05:10:56 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:00.030 05:10:56 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:00.030 05:10:56 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:00.030 05:10:56 -- target/rpc.sh@15 -- # wc -l 00:13:00.030 05:10:56 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:00.030 05:10:56 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:00.030 05:10:56 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:00.030 05:10:56 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:00.030 05:10:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.030 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.030 [2024-11-20 05:10:56.655555] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1b91110/0x1b90750) succeed. 00:13:00.030 [2024-11-20 05:10:56.664397] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1b92480/0x1b90cd0) succeed. 00:13:00.030 [2024-11-20 05:10:56.664420] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:13:00.030 05:10:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.030 05:10:56 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:00.030 05:10:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.030 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.030 05:10:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.030 05:10:56 -- target/rpc.sh@33 -- # stats='{ 00:13:00.030 "tick_rate": 2100000000, 00:13:00.030 "poll_groups": [ 00:13:00.030 { 00:13:00.030 "name": "nvmf_tgt_poll_group_0", 00:13:00.030 "admin_qpairs": 0, 00:13:00.030 "io_qpairs": 0, 00:13:00.030 "current_admin_qpairs": 0, 00:13:00.030 "current_io_qpairs": 0, 00:13:00.030 "pending_bdev_io": 0, 00:13:00.030 "completed_nvme_io": 0, 00:13:00.030 "transports": [ 00:13:00.030 { 00:13:00.030 "trtype": "RDMA", 00:13:00.030 "pending_data_buffer": 0, 00:13:00.030 "devices": [ 00:13:00.030 { 00:13:00.030 "name": "rocep175s0f0", 00:13:00.030 "polls": 1638, 00:13:00.030 "idle_polls": 1638, 00:13:00.030 "completions": 0, 00:13:00.030 "requests": 0, 00:13:00.030 "request_latency": 0, 00:13:00.030 "pending_free_request": 0, 00:13:00.030 "pending_rdma_read": 0, 00:13:00.030 "pending_rdma_write": 0, 00:13:00.030 "pending_rdma_send": 0, 00:13:00.030 "total_send_wrs": 0, 00:13:00.030 "send_doorbell_updates": 0, 00:13:00.030 "total_recv_wrs": 0, 00:13:00.030 "recv_doorbell_updates": 0 00:13:00.030 }, 00:13:00.030 { 00:13:00.030 "name": "rocep175s0f1", 00:13:00.030 "polls": 1638, 00:13:00.030 "idle_polls": 1638, 00:13:00.030 "completions": 0, 00:13:00.030 "requests": 0, 00:13:00.030 "request_latency": 0, 00:13:00.030 "pending_free_request": 0, 00:13:00.030 "pending_rdma_read": 0, 00:13:00.030 "pending_rdma_write": 0, 00:13:00.030 "pending_rdma_send": 0, 00:13:00.030 "total_send_wrs": 0, 00:13:00.030 "send_doorbell_updates": 0, 00:13:00.030 "total_recv_wrs": 0, 00:13:00.030 "recv_doorbell_updates": 0 00:13:00.030 } 00:13:00.030 ] 00:13:00.030 } 
00:13:00.030 ] 00:13:00.030 }, 00:13:00.030 { 00:13:00.030 "name": "nvmf_tgt_poll_group_1", 00:13:00.030 "admin_qpairs": 0, 00:13:00.030 "io_qpairs": 0, 00:13:00.030 "current_admin_qpairs": 0, 00:13:00.030 "current_io_qpairs": 0, 00:13:00.030 "pending_bdev_io": 0, 00:13:00.030 "completed_nvme_io": 0, 00:13:00.031 "transports": [ 00:13:00.031 { 00:13:00.031 "trtype": "RDMA", 00:13:00.031 "pending_data_buffer": 0, 00:13:00.031 "devices": [ 00:13:00.031 { 00:13:00.031 "name": "rocep175s0f0", 00:13:00.031 "polls": 1561, 00:13:00.031 "idle_polls": 1561, 00:13:00.031 "completions": 0, 00:13:00.031 "requests": 0, 00:13:00.031 "request_latency": 0, 00:13:00.031 "pending_free_request": 0, 00:13:00.031 "pending_rdma_read": 0, 00:13:00.031 "pending_rdma_write": 0, 00:13:00.031 "pending_rdma_send": 0, 00:13:00.031 "total_send_wrs": 0, 00:13:00.031 "send_doorbell_updates": 0, 00:13:00.031 "total_recv_wrs": 0, 00:13:00.031 "recv_doorbell_updates": 0 00:13:00.031 }, 00:13:00.031 { 00:13:00.031 "name": "rocep175s0f1", 00:13:00.031 "polls": 1561, 00:13:00.031 "idle_polls": 1561, 00:13:00.031 "completions": 0, 00:13:00.031 "requests": 0, 00:13:00.031 "request_latency": 0, 00:13:00.031 "pending_free_request": 0, 00:13:00.031 "pending_rdma_read": 0, 00:13:00.031 "pending_rdma_write": 0, 00:13:00.031 "pending_rdma_send": 0, 00:13:00.031 "total_send_wrs": 0, 00:13:00.031 "send_doorbell_updates": 0, 00:13:00.031 "total_recv_wrs": 0, 00:13:00.031 "recv_doorbell_updates": 0 00:13:00.031 } 00:13:00.031 ] 00:13:00.031 } 00:13:00.031 ] 00:13:00.031 }, 00:13:00.031 { 00:13:00.031 "name": "nvmf_tgt_poll_group_2", 00:13:00.031 "admin_qpairs": 0, 00:13:00.031 "io_qpairs": 0, 00:13:00.031 "current_admin_qpairs": 0, 00:13:00.031 "current_io_qpairs": 0, 00:13:00.031 "pending_bdev_io": 0, 00:13:00.031 "completed_nvme_io": 0, 00:13:00.031 "transports": [ 00:13:00.031 { 00:13:00.031 "trtype": "RDMA", 00:13:00.031 "pending_data_buffer": 0, 00:13:00.031 "devices": [ 00:13:00.031 { 00:13:00.031 "name": 
"rocep175s0f0", 00:13:00.031 "polls": 1479, 00:13:00.031 "idle_polls": 1479, 00:13:00.031 "completions": 0, 00:13:00.031 "requests": 0, 00:13:00.031 "request_latency": 0, 00:13:00.031 "pending_free_request": 0, 00:13:00.031 "pending_rdma_read": 0, 00:13:00.031 "pending_rdma_write": 0, 00:13:00.031 "pending_rdma_send": 0, 00:13:00.031 "total_send_wrs": 0, 00:13:00.031 "send_doorbell_updates": 0, 00:13:00.031 "total_recv_wrs": 0, 00:13:00.031 "recv_doorbell_updates": 0 00:13:00.031 }, 00:13:00.031 { 00:13:00.031 "name": "rocep175s0f1", 00:13:00.031 "polls": 1479, 00:13:00.031 "idle_polls": 1479, 00:13:00.031 "completions": 0, 00:13:00.031 "requests": 0, 00:13:00.031 "request_latency": 0, 00:13:00.031 "pending_free_request": 0, 00:13:00.031 "pending_rdma_read": 0, 00:13:00.031 "pending_rdma_write": 0, 00:13:00.031 "pending_rdma_send": 0, 00:13:00.031 "total_send_wrs": 0, 00:13:00.031 "send_doorbell_updates": 0, 00:13:00.031 "total_recv_wrs": 0, 00:13:00.031 "recv_doorbell_updates": 0 00:13:00.031 } 00:13:00.031 ] 00:13:00.031 } 00:13:00.031 ] 00:13:00.031 }, 00:13:00.031 { 00:13:00.031 "name": "nvmf_tgt_poll_group_3", 00:13:00.031 "admin_qpairs": 0, 00:13:00.031 "io_qpairs": 0, 00:13:00.031 "current_admin_qpairs": 0, 00:13:00.031 "current_io_qpairs": 0, 00:13:00.031 "pending_bdev_io": 0, 00:13:00.031 "completed_nvme_io": 0, 00:13:00.031 "transports": [ 00:13:00.031 { 00:13:00.031 "trtype": "RDMA", 00:13:00.031 "pending_data_buffer": 0, 00:13:00.031 "devices": [ 00:13:00.031 { 00:13:00.031 "name": "rocep175s0f0", 00:13:00.031 "polls": 1026, 00:13:00.031 "idle_polls": 1026, 00:13:00.031 "completions": 0, 00:13:00.031 "requests": 0, 00:13:00.031 "request_latency": 0, 00:13:00.031 "pending_free_request": 0, 00:13:00.031 "pending_rdma_read": 0, 00:13:00.031 "pending_rdma_write": 0, 00:13:00.031 "pending_rdma_send": 0, 00:13:00.031 "total_send_wrs": 0, 00:13:00.031 "send_doorbell_updates": 0, 00:13:00.031 "total_recv_wrs": 0, 00:13:00.031 "recv_doorbell_updates": 0 
00:13:00.031 }, 00:13:00.031 { 00:13:00.031 "name": "rocep175s0f1", 00:13:00.031 "polls": 1026, 00:13:00.031 "idle_polls": 1026, 00:13:00.031 "completions": 0, 00:13:00.031 "requests": 0, 00:13:00.031 "request_latency": 0, 00:13:00.031 "pending_free_request": 0, 00:13:00.031 "pending_rdma_read": 0, 00:13:00.031 "pending_rdma_write": 0, 00:13:00.031 "pending_rdma_send": 0, 00:13:00.031 "total_send_wrs": 0, 00:13:00.031 "send_doorbell_updates": 0, 00:13:00.031 "total_recv_wrs": 0, 00:13:00.031 "recv_doorbell_updates": 0 00:13:00.031 } 00:13:00.031 ] 00:13:00.031 } 00:13:00.031 ] 00:13:00.031 } 00:13:00.031 ] 00:13:00.031 }' 00:13:00.031 05:10:56 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:00.031 05:10:56 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.031 05:10:56 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.031 05:10:56 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.031 05:10:56 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:00.031 05:10:56 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.031 05:10:56 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.031 05:10:56 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.031 05:10:56 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.031 05:10:56 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:00.031 05:10:56 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:13:00.031 05:10:56 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:13:00.031 05:10:56 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:13:00.031 05:10:56 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:13:00.031 05:10:56 -- target/rpc.sh@15 -- # wc -l 00:13:00.031 05:10:56 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:13:00.031 05:10:56 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:13:00.290 05:10:56 -- target/rpc.sh@41 -- # 
transport_type=RDMA 00:13:00.290 05:10:56 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:13:00.290 05:10:56 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:13:00.290 05:10:56 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:13:00.290 05:10:56 -- target/rpc.sh@15 -- # wc -l 00:13:00.290 05:10:56 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:13:00.290 05:10:56 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:13:00.290 05:10:56 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:00.290 05:10:56 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:00.290 05:10:56 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:00.290 05:10:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.290 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.290 Malloc1 00:13:00.290 05:10:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.290 05:10:56 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:00.290 05:10:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.290 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.290 05:10:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.290 05:10:56 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.290 05:10:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.290 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.290 05:10:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.290 05:10:56 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:00.290 05:10:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.290 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.290 05:10:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.290 05:10:56 -- 
target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:00.290 05:10:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.291 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:00.291 [2024-11-20 05:10:56.980481] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:00.291 05:10:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.291 05:10:56 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:13:00.291 05:10:56 -- common/autotest_common.sh@650 -- # local es=0 00:13:00.291 05:10:56 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:13:00.291 05:10:56 -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:00.291 05:10:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.291 05:10:56 -- common/autotest_common.sh@642 -- # type -t nvme 00:13:00.291 05:10:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.291 05:10:56 -- common/autotest_common.sh@644 -- # type -P nvme 00:13:00.291 05:10:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.291 05:10:56 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:00.291 05:10:56 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:00.291 05:10:56 -- common/autotest_common.sh@653 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:13:00.291 [2024-11-20 05:10:57.017241] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:13:00.291 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:00.291 could not add new controller: failed to write to nvme-fabrics device 00:13:00.291 05:10:57 -- common/autotest_common.sh@653 -- # es=1 00:13:00.291 05:10:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:00.291 05:10:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:00.291 05:10:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:00.291 05:10:57 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:00.291 05:10:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.291 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:13:00.291 05:10:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.291 05:10:57 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:00.550 05:10:57 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.550 05:10:57 -- common/autotest_common.sh@1187 -- # local i=0 00:13:00.550 05:10:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.550 05:10:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:00.550 05:10:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:03.095 05:10:59 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:03.095 05:10:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:03.095 05:10:59 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.095 05:10:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:03.095 05:10:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.095 05:10:59 -- common/autotest_common.sh@1197 -- # return 0 00:13:03.095 05:10:59 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.665 05:11:00 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.665 05:11:00 -- common/autotest_common.sh@1208 -- # local i=0 00:13:03.665 05:11:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:03.665 05:11:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.665 05:11:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:03.665 05:11:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.665 05:11:00 -- common/autotest_common.sh@1220 -- # return 0 00:13:03.665 05:11:00 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:03.665 05:11:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.665 05:11:00 -- common/autotest_common.sh@10 -- # set +x 00:13:03.666 05:11:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.666 05:11:00 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:03.666 05:11:00 -- common/autotest_common.sh@650 -- # local es=0 00:13:03.666 05:11:00 -- 
common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:03.666 05:11:00 -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:03.666 05:11:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.666 05:11:00 -- common/autotest_common.sh@642 -- # type -t nvme 00:13:03.666 05:11:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.666 05:11:00 -- common/autotest_common.sh@644 -- # type -P nvme 00:13:03.666 05:11:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.666 05:11:00 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:03.666 05:11:00 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:03.666 05:11:00 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:03.666 [2024-11-20 05:11:00.287086] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:13:03.666 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:03.666 could not add new controller: failed to write to nvme-fabrics device 00:13:03.666 05:11:00 -- common/autotest_common.sh@653 -- # es=1 00:13:03.666 05:11:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:03.666 05:11:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:03.666 05:11:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:03.666 05:11:00 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:03.666 05:11:00 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.666 05:11:00 -- common/autotest_common.sh@10 -- # set +x 00:13:03.666 05:11:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.666 05:11:00 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:03.925 05:11:00 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.925 05:11:00 -- common/autotest_common.sh@1187 -- # local i=0 00:13:03.925 05:11:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.925 05:11:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:03.925 05:11:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:05.833 05:11:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:05.833 05:11:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:05.833 05:11:02 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.833 05:11:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:05.833 05:11:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.833 05:11:02 -- common/autotest_common.sh@1197 -- # return 0 00:13:05.833 05:11:02 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.772 05:11:03 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.772 05:11:03 -- common/autotest_common.sh@1208 -- # local i=0 00:13:06.772 05:11:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:06.772 05:11:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.772 05:11:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:06.772 05:11:03 -- common/autotest_common.sh@1216 -- # grep 
-q -w SPDKISFASTANDAWESOME 00:13:06.772 05:11:03 -- common/autotest_common.sh@1220 -- # return 0 00:13:06.772 05:11:03 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.772 05:11:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.772 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:06.772 05:11:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.772 05:11:03 -- target/rpc.sh@81 -- # seq 1 5 00:13:06.772 05:11:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.772 05:11:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.772 05:11:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.772 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:06.772 05:11:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.772 05:11:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:06.772 05:11:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.772 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:06.772 [2024-11-20 05:11:03.550270] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:06.772 05:11:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.772 05:11:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.772 05:11:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.772 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:06.772 05:11:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.772 05:11:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.772 05:11:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.772 05:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:06.772 05:11:03 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.772 05:11:03 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:07.032 05:11:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.032 05:11:03 -- common/autotest_common.sh@1187 -- # local i=0 00:13:07.032 05:11:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.032 05:11:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:07.032 05:11:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:09.569 05:11:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:09.569 05:11:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:09.569 05:11:05 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.569 05:11:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:09.569 05:11:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.569 05:11:05 -- common/autotest_common.sh@1197 -- # return 0 00:13:09.569 05:11:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.140 05:11:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.140 05:11:06 -- common/autotest_common.sh@1208 -- # local i=0 00:13:10.140 05:11:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:10.140 05:11:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.140 05:11:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:10.140 05:11:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.140 05:11:06 -- common/autotest_common.sh@1220 -- # return 0 00:13:10.140 05:11:06 -- target/rpc.sh@93 -- 
# rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.140 05:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.140 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:10.140 05:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.140 05:11:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.140 05:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.140 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:10.140 05:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.140 05:11:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.140 05:11:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.140 05:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.140 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:10.140 05:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.140 05:11:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:10.140 05:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.140 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:10.140 [2024-11-20 05:11:06.727789] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:10.140 05:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.140 05:11:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.140 05:11:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.140 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:10.140 05:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.140 05:11:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.140 05:11:06 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.140 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:10.140 05:11:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.140 05:11:06 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:10.401 05:11:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.401 05:11:06 -- common/autotest_common.sh@1187 -- # local i=0 00:13:10.401 05:11:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.401 05:11:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:10.401 05:11:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:12.312 05:11:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:12.312 05:11:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:12.312 05:11:08 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.312 05:11:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:12.312 05:11:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.312 05:11:08 -- common/autotest_common.sh@1197 -- # return 0 00:13:12.312 05:11:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.252 05:11:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.252 05:11:09 -- common/autotest_common.sh@1208 -- # local i=0 00:13:13.252 05:11:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:13.252 05:11:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.252 05:11:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:13.252 05:11:09 -- common/autotest_common.sh@1216 -- # grep 
-q -w SPDKISFASTANDAWESOME 00:13:13.252 05:11:09 -- common/autotest_common.sh@1220 -- # return 0 00:13:13.252 05:11:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.252 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.252 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:13.252 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.252 05:11:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.252 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.252 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:13.252 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.252 05:11:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:13.252 05:11:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.252 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.252 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:13.252 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.252 05:11:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:13.252 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.252 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:13.252 [2024-11-20 05:11:09.914928] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:13.252 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.252 05:11:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:13.252 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.252 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:13.252 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.252 
05:11:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.252 05:11:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.252 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:13:13.252 05:11:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.252 05:11:09 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:13.512 05:11:10 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.512 05:11:10 -- common/autotest_common.sh@1187 -- # local i=0 00:13:13.512 05:11:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.512 05:11:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:13.512 05:11:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:15.422 05:11:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:15.422 05:11:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:15.422 05:11:12 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.422 05:11:12 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:15.422 05:11:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.422 05:11:12 -- common/autotest_common.sh@1197 -- # return 0 00:13:15.422 05:11:12 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.413 05:11:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.413 05:11:13 -- common/autotest_common.sh@1208 -- # local i=0 00:13:16.413 05:11:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:16.413 05:11:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.413 05:11:13 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:16.413 05:11:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.413 05:11:13 -- common/autotest_common.sh@1220 -- # return 0 00:13:16.413 05:11:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.413 05:11:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.413 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:16.413 05:11:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.413 05:11:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.413 05:11:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.413 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:16.413 05:11:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.413 05:11:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.413 05:11:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.413 05:11:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.413 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:16.413 05:11:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.413 05:11:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:16.413 05:11:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.413 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:16.413 [2024-11-20 05:11:13.108745] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:16.413 05:11:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.413 05:11:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.413 05:11:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.413 05:11:13 -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.413 05:11:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.413 05:11:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.413 05:11:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.413 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:16.413 05:11:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.413 05:11:13 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:16.673 05:11:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.673 05:11:13 -- common/autotest_common.sh@1187 -- # local i=0 00:13:16.673 05:11:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.673 05:11:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:16.673 05:11:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:18.582 05:11:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:18.582 05:11:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:18.582 05:11:15 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.582 05:11:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:18.582 05:11:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.582 05:11:15 -- common/autotest_common.sh@1197 -- # return 0 00:13:18.582 05:11:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.521 05:11:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.521 05:11:16 -- common/autotest_common.sh@1208 -- # local i=0 00:13:19.521 05:11:16 -- common/autotest_common.sh@1209 -- # lsblk -o 
NAME,SERIAL 00:13:19.521 05:11:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.521 05:11:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:19.521 05:11:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.521 05:11:16 -- common/autotest_common.sh@1220 -- # return 0 00:13:19.521 05:11:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.521 05:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.521 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:13:19.521 05:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.521 05:11:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.521 05:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.521 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:13:19.521 05:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.521 05:11:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.521 05:11:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.521 05:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.521 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:13:19.521 05:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.521 05:11:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:19.521 05:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.521 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:13:19.521 [2024-11-20 05:11:16.298355] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:19.521 05:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.521 05:11:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.521 05:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.521 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:13:19.521 05:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.521 05:11:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.521 05:11:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.521 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:13:19.521 05:11:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.521 05:11:16 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:19.781 05:11:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.781 05:11:16 -- common/autotest_common.sh@1187 -- # local i=0 00:13:19.781 05:11:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.781 05:11:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:19.781 05:11:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:22.323 05:11:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:22.323 05:11:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:22.323 05:11:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.323 05:11:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:22.323 05:11:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.323 05:11:18 -- common/autotest_common.sh@1197 -- # return 0 00:13:22.323 05:11:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.891 05:11:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
00:13:22.891 05:11:19 -- common/autotest_common.sh@1208 -- # local i=0 00:13:22.891 05:11:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:22.891 05:11:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.891 05:11:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:22.891 05:11:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.891 05:11:19 -- common/autotest_common.sh@1220 -- # return 0 00:13:22.891 05:11:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@99 -- # seq 1 5 00:13:22.891 05:11:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.891 05:11:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 [2024-11-20 05:11:19.484017] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target 
Listening on 192.168.100.8 port 4420 *** 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.891 05:11:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:22.891 05:11:19 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 [2024-11-20 05:11:19.536197] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.891 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.891 05:11:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.891 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.892 05:11:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 [2024-11-20 05:11:19.584393] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.892 05:11:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 [2024-11-20 05:11:19.632569] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 
00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.892 05:11:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 [2024-11-20 05:11:19.684776] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:22.892 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.892 05:11:19 -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.892 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.892 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:23.152 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.152 05:11:19 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:23.152 05:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.152 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:23.152 05:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.152 05:11:19 -- target/rpc.sh@110 -- # stats='{ 00:13:23.152 "tick_rate": 2100000000, 00:13:23.152 "poll_groups": [ 00:13:23.152 { 00:13:23.152 "name": "nvmf_tgt_poll_group_0", 00:13:23.152 "admin_qpairs": 2, 00:13:23.152 "io_qpairs": 27, 00:13:23.152 "current_admin_qpairs": 0, 00:13:23.152 "current_io_qpairs": 0, 00:13:23.152 "pending_bdev_io": 0, 00:13:23.152 "completed_nvme_io": 121, 00:13:23.152 "transports": [ 00:13:23.152 { 00:13:23.152 "trtype": "RDMA", 00:13:23.152 "pending_data_buffer": 0, 00:13:23.152 "devices": [ 00:13:23.152 { 00:13:23.152 "name": "rocep175s0f0", 00:13:23.152 "polls": 2677418, 00:13:23.152 "idle_polls": 2676996, 00:13:23.152 "completions": 3895, 00:13:23.152 "requests": 3722, 00:13:23.152 "request_latency": 461866468, 00:13:23.152 "pending_free_request": 0, 00:13:23.152 "pending_rdma_read": 0, 00:13:23.152 "pending_rdma_write": 0, 00:13:23.152 "pending_rdma_send": 0, 00:13:23.152 "total_send_wrs": 291, 00:13:23.152 "send_doorbell_updates": 149, 00:13:23.152 "total_recv_wrs": 3722, 00:13:23.152 "recv_doorbell_updates": 176 00:13:23.152 }, 00:13:23.152 { 00:13:23.152 "name": "rocep175s0f1", 00:13:23.152 "polls": 2677307, 00:13:23.152 "idle_polls": 2677307, 00:13:23.152 "completions": 0, 00:13:23.152 "requests": 0, 00:13:23.152 "request_latency": 0, 00:13:23.152 "pending_free_request": 0, 00:13:23.152 "pending_rdma_read": 0, 00:13:23.152 "pending_rdma_write": 0, 
00:13:23.152 "pending_rdma_send": 0, 00:13:23.152 "total_send_wrs": 0, 00:13:23.152 "send_doorbell_updates": 0, 00:13:23.152 "total_recv_wrs": 0, 00:13:23.152 "recv_doorbell_updates": 0 00:13:23.152 } 00:13:23.152 ] 00:13:23.152 } 00:13:23.152 ] 00:13:23.152 }, 00:13:23.152 { 00:13:23.152 "name": "nvmf_tgt_poll_group_1", 00:13:23.152 "admin_qpairs": 2, 00:13:23.152 "io_qpairs": 26, 00:13:23.152 "current_admin_qpairs": 0, 00:13:23.152 "current_io_qpairs": 0, 00:13:23.152 "pending_bdev_io": 0, 00:13:23.152 "completed_nvme_io": 80, 00:13:23.152 "transports": [ 00:13:23.152 { 00:13:23.153 "trtype": "RDMA", 00:13:23.153 "pending_data_buffer": 0, 00:13:23.153 "devices": [ 00:13:23.153 { 00:13:23.153 "name": "rocep175s0f0", 00:13:23.153 "polls": 2702084, 00:13:23.153 "idle_polls": 2701734, 00:13:23.153 "completions": 3650, 00:13:23.153 "requests": 3520, 00:13:23.153 "request_latency": 428310706, 00:13:23.153 "pending_free_request": 0, 00:13:23.153 "pending_rdma_read": 0, 00:13:23.153 "pending_rdma_write": 0, 00:13:23.153 "pending_rdma_send": 0, 00:13:23.153 "total_send_wrs": 208, 00:13:23.153 "send_doorbell_updates": 118, 00:13:23.153 "total_recv_wrs": 3520, 00:13:23.153 "recv_doorbell_updates": 144 00:13:23.153 }, 00:13:23.153 { 00:13:23.153 "name": "rocep175s0f1", 00:13:23.153 "polls": 2701978, 00:13:23.153 "idle_polls": 2701978, 00:13:23.153 "completions": 0, 00:13:23.153 "requests": 0, 00:13:23.153 "request_latency": 0, 00:13:23.153 "pending_free_request": 0, 00:13:23.153 "pending_rdma_read": 0, 00:13:23.153 "pending_rdma_write": 0, 00:13:23.153 "pending_rdma_send": 0, 00:13:23.153 "total_send_wrs": 0, 00:13:23.153 "send_doorbell_updates": 0, 00:13:23.153 "total_recv_wrs": 0, 00:13:23.153 "recv_doorbell_updates": 0 00:13:23.153 } 00:13:23.153 ] 00:13:23.153 } 00:13:23.153 ] 00:13:23.153 }, 00:13:23.153 { 00:13:23.153 "name": "nvmf_tgt_poll_group_2", 00:13:23.153 "admin_qpairs": 1, 00:13:23.153 "io_qpairs": 26, 00:13:23.153 "current_admin_qpairs": 0, 00:13:23.153 
"current_io_qpairs": 0, 00:13:23.153 "pending_bdev_io": 0, 00:13:23.153 "completed_nvme_io": 127, 00:13:23.153 "transports": [ 00:13:23.153 { 00:13:23.153 "trtype": "RDMA", 00:13:23.153 "pending_data_buffer": 0, 00:13:23.153 "devices": [ 00:13:23.153 { 00:13:23.153 "name": "rocep175s0f0", 00:13:23.153 "polls": 2686872, 00:13:23.153 "idle_polls": 2686493, 00:13:23.153 "completions": 3698, 00:13:23.153 "requests": 3544, 00:13:23.153 "request_latency": 440280846, 00:13:23.153 "pending_free_request": 0, 00:13:23.153 "pending_rdma_read": 0, 00:13:23.153 "pending_rdma_write": 0, 00:13:23.153 "pending_rdma_send": 0, 00:13:23.153 "total_send_wrs": 267, 00:13:23.153 "send_doorbell_updates": 130, 00:13:23.153 "total_recv_wrs": 3544, 00:13:23.153 "recv_doorbell_updates": 156 00:13:23.153 }, 00:13:23.153 { 00:13:23.153 "name": "rocep175s0f1", 00:13:23.153 "polls": 2686766, 00:13:23.153 "idle_polls": 2686766, 00:13:23.153 "completions": 0, 00:13:23.153 "requests": 0, 00:13:23.153 "request_latency": 0, 00:13:23.153 "pending_free_request": 0, 00:13:23.153 "pending_rdma_read": 0, 00:13:23.153 "pending_rdma_write": 0, 00:13:23.153 "pending_rdma_send": 0, 00:13:23.153 "total_send_wrs": 0, 00:13:23.153 "send_doorbell_updates": 0, 00:13:23.153 "total_recv_wrs": 0, 00:13:23.153 "recv_doorbell_updates": 0 00:13:23.153 } 00:13:23.153 ] 00:13:23.153 } 00:13:23.153 ] 00:13:23.153 }, 00:13:23.153 { 00:13:23.153 "name": "nvmf_tgt_poll_group_3", 00:13:23.153 "admin_qpairs": 2, 00:13:23.153 "io_qpairs": 26, 00:13:23.153 "current_admin_qpairs": 0, 00:13:23.153 "current_io_qpairs": 0, 00:13:23.153 "pending_bdev_io": 0, 00:13:23.153 "completed_nvme_io": 127, 00:13:23.153 "transports": [ 00:13:23.153 { 00:13:23.153 "trtype": "RDMA", 00:13:23.153 "pending_data_buffer": 0, 00:13:23.153 "devices": [ 00:13:23.153 { 00:13:23.153 "name": "rocep175s0f0", 00:13:23.153 "polls": 2102496, 00:13:23.153 "idle_polls": 2102072, 00:13:23.153 "completions": 3746, 00:13:23.153 "requests": 3568, 00:13:23.153 
"request_latency": 448664366, 00:13:23.153 "pending_free_request": 0, 00:13:23.153 "pending_rdma_read": 0, 00:13:23.153 "pending_rdma_write": 0, 00:13:23.153 "pending_rdma_send": 0, 00:13:23.153 "total_send_wrs": 304, 00:13:23.153 "send_doorbell_updates": 152, 00:13:23.153 "total_recv_wrs": 3568, 00:13:23.153 "recv_doorbell_updates": 178 00:13:23.153 }, 00:13:23.153 { 00:13:23.153 "name": "rocep175s0f1", 00:13:23.153 "polls": 2102390, 00:13:23.153 "idle_polls": 2102390, 00:13:23.153 "completions": 0, 00:13:23.153 "requests": 0, 00:13:23.153 "request_latency": 0, 00:13:23.153 "pending_free_request": 0, 00:13:23.153 "pending_rdma_read": 0, 00:13:23.153 "pending_rdma_write": 0, 00:13:23.153 "pending_rdma_send": 0, 00:13:23.153 "total_send_wrs": 0, 00:13:23.153 "send_doorbell_updates": 0, 00:13:23.153 "total_recv_wrs": 0, 00:13:23.153 "recv_doorbell_updates": 0 00:13:23.153 } 00:13:23.153 ] 00:13:23.153 } 00:13:23.153 ] 00:13:23.153 } 00:13:23.153 ] 00:13:23.153 }' 00:13:23.153 05:11:19 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:23.153 05:11:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:23.153 05:11:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:23.153 05:11:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.153 05:11:19 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:23.153 05:11:19 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:23.153 05:11:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:23.153 05:11:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:23.153 05:11:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.153 05:11:19 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:13:23.153 05:11:19 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:13:23.153 05:11:19 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:13:23.153 05:11:19 -- target/rpc.sh@19 -- # local 
'filter=.poll_groups[].transports[].devices[].completions' 00:13:23.153 05:11:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:13:23.153 05:11:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.153 05:11:19 -- target/rpc.sh@117 -- # (( 14989 > 0 )) 00:13:23.153 05:11:19 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:13:23.153 05:11:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:13:23.153 05:11:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.153 05:11:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:13:23.153 05:11:19 -- target/rpc.sh@118 -- # (( 1779122386 > 0 )) 00:13:23.153 05:11:19 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:23.153 05:11:19 -- target/rpc.sh@123 -- # nvmftestfini 00:13:23.153 05:11:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:23.153 05:11:19 -- nvmf/common.sh@116 -- # sync 00:13:23.153 05:11:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:13:23.153 05:11:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:13:23.153 05:11:19 -- nvmf/common.sh@119 -- # set +e 00:13:23.153 05:11:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:23.153 05:11:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:13:23.153 rmmod nvme_rdma 00:13:23.153 rmmod nvme_fabrics 00:13:23.413 05:11:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:23.413 05:11:19 -- nvmf/common.sh@123 -- # set -e 00:13:23.413 05:11:19 -- nvmf/common.sh@124 -- # return 0 00:13:23.413 05:11:19 -- nvmf/common.sh@477 -- # '[' -n 204767 ']' 00:13:23.413 05:11:19 -- nvmf/common.sh@478 -- # killprocess 204767 00:13:23.413 05:11:19 -- common/autotest_common.sh@936 -- # '[' -z 204767 ']' 00:13:23.413 05:11:19 -- common/autotest_common.sh@940 -- # kill -0 204767 00:13:23.413 05:11:19 -- common/autotest_common.sh@941 -- # uname 00:13:23.413 05:11:19 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.413 05:11:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 204767 00:13:23.413 05:11:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:23.413 05:11:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:23.413 05:11:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 204767' 00:13:23.413 killing process with pid 204767 00:13:23.413 05:11:20 -- common/autotest_common.sh@955 -- # kill 204767 00:13:23.413 05:11:20 -- common/autotest_common.sh@960 -- # wait 204767 00:13:23.672 05:11:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:23.672 05:11:20 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:13:23.672 00:13:23.672 real 0m30.175s 00:13:23.672 user 1m39.937s 00:13:23.672 sys 0m5.607s 00:13:23.672 05:11:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:23.672 05:11:20 -- common/autotest_common.sh@10 -- # set +x 00:13:23.672 ************************************ 00:13:23.672 END TEST nvmf_rpc 00:13:23.672 ************************************ 00:13:23.672 05:11:20 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:13:23.672 05:11:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:23.672 05:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.672 05:11:20 -- common/autotest_common.sh@10 -- # set +x 00:13:23.672 ************************************ 00:13:23.672 START TEST nvmf_invalid 00:13:23.672 ************************************ 00:13:23.672 05:11:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:13:23.672 * Looking for test storage... 
00:13:23.672 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:13:23.672 05:11:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:23.672 05:11:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:23.672 05:11:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:23.672 05:11:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:23.672 05:11:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:23.672 05:11:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:23.672 05:11:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:23.672 05:11:20 -- scripts/common.sh@335 -- # IFS=.-: 00:13:23.672 05:11:20 -- scripts/common.sh@335 -- # read -ra ver1 00:13:23.672 05:11:20 -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.672 05:11:20 -- scripts/common.sh@336 -- # read -ra ver2 00:13:23.672 05:11:20 -- scripts/common.sh@337 -- # local 'op=<' 00:13:23.672 05:11:20 -- scripts/common.sh@339 -- # ver1_l=2 00:13:23.672 05:11:20 -- scripts/common.sh@340 -- # ver2_l=1 00:13:23.672 05:11:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:23.672 05:11:20 -- scripts/common.sh@343 -- # case "$op" in 00:13:23.672 05:11:20 -- scripts/common.sh@344 -- # : 1 00:13:23.672 05:11:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:23.672 05:11:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:23.672 05:11:20 -- scripts/common.sh@364 -- # decimal 1 00:13:23.672 05:11:20 -- scripts/common.sh@352 -- # local d=1 00:13:23.672 05:11:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.672 05:11:20 -- scripts/common.sh@354 -- # echo 1 00:13:23.672 05:11:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:23.672 05:11:20 -- scripts/common.sh@365 -- # decimal 2 00:13:23.672 05:11:20 -- scripts/common.sh@352 -- # local d=2 00:13:23.672 05:11:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.672 05:11:20 -- scripts/common.sh@354 -- # echo 2 00:13:23.672 05:11:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:23.672 05:11:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:23.672 05:11:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:23.672 05:11:20 -- scripts/common.sh@367 -- # return 0 00:13:23.673 05:11:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.673 05:11:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:23.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.673 --rc genhtml_branch_coverage=1 00:13:23.673 --rc genhtml_function_coverage=1 00:13:23.673 --rc genhtml_legend=1 00:13:23.673 --rc geninfo_all_blocks=1 00:13:23.673 --rc geninfo_unexecuted_blocks=1 00:13:23.673 00:13:23.673 ' 00:13:23.673 05:11:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:23.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.673 --rc genhtml_branch_coverage=1 00:13:23.673 --rc genhtml_function_coverage=1 00:13:23.673 --rc genhtml_legend=1 00:13:23.673 --rc geninfo_all_blocks=1 00:13:23.673 --rc geninfo_unexecuted_blocks=1 00:13:23.673 00:13:23.673 ' 00:13:23.673 05:11:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:23.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.673 --rc genhtml_branch_coverage=1 00:13:23.673 --rc 
genhtml_function_coverage=1 00:13:23.673 --rc genhtml_legend=1 00:13:23.673 --rc geninfo_all_blocks=1 00:13:23.673 --rc geninfo_unexecuted_blocks=1 00:13:23.673 00:13:23.673 ' 00:13:23.673 05:11:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:23.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.673 --rc genhtml_branch_coverage=1 00:13:23.673 --rc genhtml_function_coverage=1 00:13:23.673 --rc genhtml_legend=1 00:13:23.673 --rc geninfo_all_blocks=1 00:13:23.673 --rc geninfo_unexecuted_blocks=1 00:13:23.673 00:13:23.673 ' 00:13:23.673 05:11:20 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.673 05:11:20 -- nvmf/common.sh@7 -- # uname -s 00:13:23.932 05:11:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.932 05:11:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.932 05:11:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.932 05:11:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.932 05:11:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.932 05:11:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.932 05:11:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.932 05:11:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.932 05:11:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.932 05:11:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.933 05:11:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:23.933 05:11:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:23.933 05:11:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.933 05:11:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.933 05:11:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:23.933 05:11:20 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:13:23.933 05:11:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.933 05:11:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.933 05:11:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.933 05:11:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.933 05:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.933 05:11:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.933 05:11:20 -- paths/export.sh@5 -- # export PATH 00:13:23.933 05:11:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.933 05:11:20 -- nvmf/common.sh@46 -- # : 0 00:13:23.933 05:11:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:23.933 05:11:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:23.933 05:11:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:23.933 05:11:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.933 05:11:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.933 05:11:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:23.933 05:11:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:23.933 05:11:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:23.933 05:11:20 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:23.933 05:11:20 -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:13:23.933 05:11:20 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:23.933 05:11:20 -- target/invalid.sh@14 -- # target=foobar 00:13:23.933 05:11:20 -- target/invalid.sh@16 -- # RANDOM=0 00:13:23.933 05:11:20 -- target/invalid.sh@34 -- # nvmftestinit 00:13:23.933 05:11:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:13:23.933 05:11:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.933 05:11:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:23.933 05:11:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:23.933 05:11:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:23.933 05:11:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.933 05:11:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.933 05:11:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.933 05:11:20 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:23.933 05:11:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:23.933 05:11:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:23.933 05:11:20 -- common/autotest_common.sh@10 -- # set +x 00:13:29.213 05:11:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:29.213 05:11:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:29.213 05:11:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:29.213 05:11:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:29.213 05:11:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:29.213 05:11:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:29.213 05:11:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:29.213 05:11:25 -- nvmf/common.sh@294 -- # net_devs=() 00:13:29.213 05:11:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:29.213 05:11:25 -- nvmf/common.sh@295 -- # e810=() 00:13:29.213 05:11:25 -- nvmf/common.sh@295 -- # local -ga e810 
00:13:29.213 05:11:25 -- nvmf/common.sh@296 -- # x722=() 00:13:29.213 05:11:25 -- nvmf/common.sh@296 -- # local -ga x722 00:13:29.213 05:11:25 -- nvmf/common.sh@297 -- # mlx=() 00:13:29.213 05:11:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:29.213 05:11:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.213 05:11:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:29.213 05:11:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:13:29.213 05:11:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:13:29.213 05:11:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:13:29.213 05:11:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:29.213 05:11:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:29.213 05:11:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:29.214 05:11:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:29.214 05:11:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:29.214 
Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:29.214 05:11:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:13:29.214 05:11:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:29.214 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:29.214 05:11:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:13:29.214 05:11:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:29.214 05:11:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:13:29.214 05:11:25 -- nvmf/common.sh@376 -- # modinfo irdma 00:13:29.214 05:11:25 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:13:29.214 05:11:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.214 05:11:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:29.214 05:11:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.214 05:11:25 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:29.214 Found net devices under 0000:af:00.0: cvl_0_0 00:13:29.214 05:11:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.214 05:11:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.214 05:11:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:29.214 05:11:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.214 05:11:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:29.214 Found net devices under 0000:af:00.1: cvl_0_1 00:13:29.214 05:11:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.214 05:11:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:29.214 05:11:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:29.214 05:11:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:13:29.214 05:11:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:13:29.214 05:11:25 -- nvmf/common.sh@57 -- # uname 00:13:29.214 05:11:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:13:29.214 05:11:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:13:29.214 05:11:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:13:29.214 05:11:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:13:29.214 05:11:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:13:29.214 05:11:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:13:29.214 05:11:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:13:29.214 05:11:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:13:29.214 05:11:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:13:29.214 05:11:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:29.214 05:11:25 -- 
nvmf/common.sh@72 -- # get_rdma_if_list 00:13:29.214 05:11:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:29.214 05:11:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:13:29.214 05:11:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:13:29.214 05:11:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:29.214 05:11:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:13:29.214 05:11:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:13:29.214 05:11:25 -- nvmf/common.sh@104 -- # continue 2 00:13:29.214 05:11:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:13:29.214 05:11:25 -- nvmf/common.sh@104 -- # continue 2 00:13:29.214 05:11:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:13:29.214 05:11:25 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:13:29.214 05:11:25 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:13:29.214 05:11:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:13:29.214 05:11:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@80 -- # ip addr show 
cvl_0_0 00:13:29.214 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:29.214 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:13:29.214 altname enp175s0f0np0 00:13:29.214 altname ens801f0np0 00:13:29.214 inet 192.168.100.8/24 scope global cvl_0_0 00:13:29.214 valid_lft forever preferred_lft forever 00:13:29.214 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:13:29.214 valid_lft forever preferred_lft forever 00:13:29.214 05:11:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:13:29.214 05:11:25 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:13:29.214 05:11:25 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:29.214 05:11:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:13:29.214 05:11:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:13:29.214 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:29.214 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:13:29.214 altname enp175s0f1np1 00:13:29.214 altname ens801f1np1 00:13:29.214 inet 192.168.100.9/24 scope global cvl_0_1 00:13:29.214 valid_lft forever preferred_lft forever 00:13:29.214 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:13:29.214 valid_lft forever preferred_lft forever 00:13:29.214 05:11:25 -- nvmf/common.sh@410 -- # return 0 00:13:29.214 05:11:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:29.214 05:11:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:29.214 05:11:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:13:29.214 05:11:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:13:29.214 05:11:25 -- nvmf/common.sh@91 -- # local 
net_dev rxe_net_dev rxe_net_devs 00:13:29.214 05:11:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:13:29.214 05:11:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:13:29.214 05:11:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:29.214 05:11:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:13:29.214 05:11:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:13:29.214 05:11:25 -- nvmf/common.sh@104 -- # continue 2 00:13:29.214 05:11:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.214 05:11:25 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:29.214 05:11:25 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:13:29.214 05:11:25 -- nvmf/common.sh@104 -- # continue 2 00:13:29.214 05:11:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:13:29.214 05:11:25 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:13:29.214 05:11:25 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:29.214 05:11:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:13:29.214 05:11:25 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:13:29.214 05:11:25 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 
00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:29.214 05:11:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:29.214 05:11:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:13:29.214 192.168.100.9' 00:13:29.214 05:11:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:13:29.214 192.168.100.9' 00:13:29.214 05:11:25 -- nvmf/common.sh@445 -- # head -n 1 00:13:29.214 05:11:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:29.214 05:11:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:29.214 192.168.100.9' 00:13:29.214 05:11:25 -- nvmf/common.sh@446 -- # tail -n +2 00:13:29.214 05:11:25 -- nvmf/common.sh@446 -- # head -n 1 00:13:29.214 05:11:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:29.214 05:11:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:13:29.214 05:11:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:29.214 05:11:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:13:29.214 05:11:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:13:29.215 05:11:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:13:29.215 05:11:25 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:29.215 05:11:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:29.215 05:11:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.215 05:11:25 -- common/autotest_common.sh@10 -- # set +x 00:13:29.215 05:11:25 -- nvmf/common.sh@469 -- # nvmfpid=212026 00:13:29.215 05:11:25 -- nvmf/common.sh@470 -- # waitforlisten 212026 00:13:29.215 05:11:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.215 05:11:25 -- common/autotest_common.sh@829 -- # '[' -z 212026 ']' 00:13:29.215 05:11:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.215 05:11:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.215 05:11:25 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.215 05:11:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.215 05:11:25 -- common/autotest_common.sh@10 -- # set +x 00:13:29.215 [2024-11-20 05:11:25.883665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:29.215 [2024-11-20 05:11:25.883711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.215 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.215 [2024-11-20 05:11:25.938673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.215 [2024-11-20 05:11:26.014310] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:29.215 [2024-11-20 05:11:26.014416] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.215 [2024-11-20 05:11:26.014422] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.215 [2024-11-20 05:11:26.014429] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:29.215 [2024-11-20 05:11:26.014474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.215 [2024-11-20 05:11:26.014573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.215 [2024-11-20 05:11:26.014658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.215 [2024-11-20 05:11:26.014659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.155 05:11:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.155 05:11:26 -- common/autotest_common.sh@862 -- # return 0 00:13:30.155 05:11:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:30.155 05:11:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.155 05:11:26 -- common/autotest_common.sh@10 -- # set +x 00:13:30.155 05:11:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.155 05:11:26 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:30.155 05:11:26 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5835 00:13:30.155 [2024-11-20 05:11:26.900798] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:30.155 05:11:26 -- target/invalid.sh@40 -- # out='request: 00:13:30.155 { 00:13:30.155 "nqn": "nqn.2016-06.io.spdk:cnode5835", 00:13:30.155 "tgt_name": "foobar", 00:13:30.155 "method": "nvmf_create_subsystem", 00:13:30.155 "req_id": 1 00:13:30.155 } 00:13:30.155 Got JSON-RPC error response 00:13:30.155 response: 00:13:30.155 { 00:13:30.155 "code": -32603, 00:13:30.155 "message": "Unable to find target foobar" 00:13:30.155 }' 00:13:30.155 05:11:26 -- target/invalid.sh@41 -- # [[ request: 00:13:30.155 { 00:13:30.155 "nqn": "nqn.2016-06.io.spdk:cnode5835", 00:13:30.155 "tgt_name": "foobar", 00:13:30.155 "method": "nvmf_create_subsystem", 
00:13:30.155 "req_id": 1 00:13:30.155 } 00:13:30.155 Got JSON-RPC error response 00:13:30.155 response: 00:13:30.155 { 00:13:30.155 "code": -32603, 00:13:30.155 "message": "Unable to find target foobar" 00:13:30.155 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:30.155 05:11:26 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:30.155 05:11:26 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10077 00:13:30.414 [2024-11-20 05:11:27.085467] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10077: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:30.414 05:11:27 -- target/invalid.sh@45 -- # out='request: 00:13:30.414 { 00:13:30.414 "nqn": "nqn.2016-06.io.spdk:cnode10077", 00:13:30.414 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.414 "method": "nvmf_create_subsystem", 00:13:30.414 "req_id": 1 00:13:30.414 } 00:13:30.414 Got JSON-RPC error response 00:13:30.414 response: 00:13:30.414 { 00:13:30.414 "code": -32602, 00:13:30.414 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.414 }' 00:13:30.414 05:11:27 -- target/invalid.sh@46 -- # [[ request: 00:13:30.414 { 00:13:30.414 "nqn": "nqn.2016-06.io.spdk:cnode10077", 00:13:30.414 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.414 "method": "nvmf_create_subsystem", 00:13:30.414 "req_id": 1 00:13:30.414 } 00:13:30.414 Got JSON-RPC error response 00:13:30.414 response: 00:13:30.414 { 00:13:30.414 "code": -32602, 00:13:30.414 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.414 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.414 05:11:27 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:30.414 05:11:27 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27882 00:13:30.674 [2024-11-20 05:11:27.290115] nvmf_rpc.c: 
427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27882: invalid model number 'SPDK_Controller' 00:13:30.674 05:11:27 -- target/invalid.sh@50 -- # out='request: 00:13:30.674 { 00:13:30.674 "nqn": "nqn.2016-06.io.spdk:cnode27882", 00:13:30.674 "model_number": "SPDK_Controller\u001f", 00:13:30.674 "method": "nvmf_create_subsystem", 00:13:30.674 "req_id": 1 00:13:30.674 } 00:13:30.674 Got JSON-RPC error response 00:13:30.674 response: 00:13:30.674 { 00:13:30.674 "code": -32602, 00:13:30.674 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.674 }' 00:13:30.674 05:11:27 -- target/invalid.sh@51 -- # [[ request: 00:13:30.674 { 00:13:30.674 "nqn": "nqn.2016-06.io.spdk:cnode27882", 00:13:30.674 "model_number": "SPDK_Controller\u001f", 00:13:30.674 "method": "nvmf_create_subsystem", 00:13:30.674 "req_id": 1 00:13:30.674 } 00:13:30.674 Got JSON-RPC error response 00:13:30.674 response: 00:13:30.674 { 00:13:30.674 "code": -32602, 00:13:30.674 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.674 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:30.674 05:11:27 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:30.674 05:11:27 -- target/invalid.sh@19 -- # local length=21 ll 00:13:30.675 05:11:27 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.675 05:11:27 -- target/invalid.sh@21 -- # local chars 00:13:30.675 05:11:27 -- target/invalid.sh@22 -- # local string 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.675 05:11:27 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 72 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=H 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 103 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=g 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 54 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=6 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 111 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=o 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 76 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=L 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 68 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=D 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 
-- target/invalid.sh@25 -- # printf %x 96 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+='`' 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 96 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+='`' 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 62 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+='>' 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 37 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=% 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 77 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=M 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 106 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=j 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 59 00:13:30.675 
05:11:27 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=';' 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 51 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=3 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 87 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=W 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 83 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=S 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 52 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=4 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 43 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=+ 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 60 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:30.675 
05:11:27 -- target/invalid.sh@25 -- # string+='<' 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 95 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=_ 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # printf %x 99 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:30.675 05:11:27 -- target/invalid.sh@25 -- # string+=c 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.675 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.675 05:11:27 -- target/invalid.sh@28 -- # [[ H == \- ]] 00:13:30.675 05:11:27 -- target/invalid.sh@31 -- # echo 'Hg6oLD``>%Mj;3WS4+<_c' 00:13:30.675 05:11:27 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Hg6oLD``>%Mj;3WS4+<_c' nqn.2016-06.io.spdk:cnode31446 00:13:30.936 [2024-11-20 05:11:27.619219] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31446: invalid serial number 'Hg6oLD``>%Mj;3WS4+<_c' 00:13:30.936 05:11:27 -- target/invalid.sh@54 -- # out='request: 00:13:30.936 { 00:13:30.936 "nqn": "nqn.2016-06.io.spdk:cnode31446", 00:13:30.936 "serial_number": "Hg6oLD``>%Mj;3WS4+<_c", 00:13:30.936 "method": "nvmf_create_subsystem", 00:13:30.936 "req_id": 1 00:13:30.936 } 00:13:30.936 Got JSON-RPC error response 00:13:30.936 response: 00:13:30.936 { 00:13:30.936 "code": -32602, 00:13:30.936 "message": "Invalid SN Hg6oLD``>%Mj;3WS4+<_c" 00:13:30.936 }' 00:13:30.936 05:11:27 -- target/invalid.sh@55 -- # [[ request: 00:13:30.936 { 00:13:30.936 "nqn": "nqn.2016-06.io.spdk:cnode31446", 00:13:30.936 "serial_number": "Hg6oLD``>%Mj;3WS4+<_c", 
00:13:30.936 "method": "nvmf_create_subsystem", 00:13:30.936 "req_id": 1 00:13:30.936 } 00:13:30.936 Got JSON-RPC error response 00:13:30.936 response: 00:13:30.936 { 00:13:30.936 "code": -32602, 00:13:30.936 "message": "Invalid SN Hg6oLD``>%Mj;3WS4+<_c" 00:13:30.936 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.936 05:11:27 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:30.936 05:11:27 -- target/invalid.sh@19 -- # local length=41 ll 00:13:30.936 05:11:27 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.936 05:11:27 -- target/invalid.sh@21 -- # local chars 00:13:30.936 05:11:27 -- target/invalid.sh@22 -- # local string 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # printf %x 101 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # string+=e 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # printf %x 42 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # string+='*' 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # printf %x 62 00:13:30.936 05:11:27 -- 
target/invalid.sh@25 -- # echo -e '\x3e' 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # string+='>' 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # printf %x 38 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # string+='&' 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # printf %x 40 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # string+='(' 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # printf %x 126 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # string+='~' 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # printf %x 77 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # string+=M 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.936 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # printf %x 117 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:30.936 05:11:27 -- target/invalid.sh@25 -- # string+=u 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 53 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:30.937 
05:11:27 -- target/invalid.sh@25 -- # string+=5 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 54 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # string+=6 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 95 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # string+=_ 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 84 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # string+=T 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 78 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # string+=N 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 85 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # string+=U 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 108 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # string+=l 00:13:30.937 
05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 104 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # string+=h 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 39 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # string+=\' 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 05:11:27 -- target/invalid.sh@25 -- # printf %x 88 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=X 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 98 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=b 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 43 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=+ 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 32 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=' ' 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 
05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 99 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=c 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 117 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=u 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 54 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=6 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 91 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+='[' 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 109 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=m 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 38 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+='&' 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 66 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=B 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 32 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=' ' 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 58 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=: 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 74 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # string+=J 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.197 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.197 05:11:27 -- target/invalid.sh@25 -- # printf %x 36 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+='$' 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # printf %x 110 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+=n 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # printf %x 
88 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+=X 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # printf %x 33 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+='!' 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # printf %x 47 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+=/ 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # printf %x 68 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+=D 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # printf %x 94 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+='^' 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # printf %x 45 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+=- 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # printf %x 103 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e 
'\x67' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+=g 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # printf %x 83 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:31.198 05:11:27 -- target/invalid.sh@25 -- # string+=S 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.198 05:11:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.198 05:11:27 -- target/invalid.sh@28 -- # [[ e == \- ]] 00:13:31.198 05:11:27 -- target/invalid.sh@31 -- # echo 'e*>&(~Mu56_TNUlh'\''Xb+ cu6[m&B :J$nX!/D^-gS' 00:13:31.198 05:11:27 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'e*>&(~Mu56_TNUlh'\''Xb+ cu6[m&B :J$nX!/D^-gS' nqn.2016-06.io.spdk:cnode1330 00:13:31.457 [2024-11-20 05:11:28.060667] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1330: invalid model number 'e*>&(~Mu56_TNUlh'Xb+ cu6[m&B :J$nX!/D^-gS' 00:13:31.457 05:11:28 -- target/invalid.sh@58 -- # out='request: 00:13:31.457 { 00:13:31.457 "nqn": "nqn.2016-06.io.spdk:cnode1330", 00:13:31.457 "model_number": "e*>&(~Mu56_TNUlh'\''Xb+ cu6[m&B :J$nX!/D^-gS", 00:13:31.457 "method": "nvmf_create_subsystem", 00:13:31.457 "req_id": 1 00:13:31.457 } 00:13:31.457 Got JSON-RPC error response 00:13:31.457 response: 00:13:31.457 { 00:13:31.457 "code": -32602, 00:13:31.457 "message": "Invalid MN e*>&(~Mu56_TNUlh'\''Xb+ cu6[m&B :J$nX!/D^-gS" 00:13:31.457 }' 00:13:31.457 05:11:28 -- target/invalid.sh@59 -- # [[ request: 00:13:31.457 { 00:13:31.457 "nqn": "nqn.2016-06.io.spdk:cnode1330", 00:13:31.457 "model_number": "e*>&(~Mu56_TNUlh'Xb+ cu6[m&B :J$nX!/D^-gS", 00:13:31.457 "method": "nvmf_create_subsystem", 00:13:31.457 "req_id": 1 00:13:31.457 } 00:13:31.457 Got JSON-RPC error response 00:13:31.457 response: 00:13:31.457 { 00:13:31.457 
"code": -32602, 00:13:31.457 "message": "Invalid MN e*>&(~Mu56_TNUlh'Xb+ cu6[m&B :J$nX!/D^-gS" 00:13:31.457 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:31.457 05:11:28 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:13:31.457 [2024-11-20 05:11:28.262264] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x13159e0/0x1315020) succeed. 00:13:31.457 [2024-11-20 05:11:28.271116] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1316d50/0x13155a0) succeed. 00:13:31.457 [2024-11-20 05:11:28.271139] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:13:31.457 [2024-11-20 05:11:28.273740] iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:13:31.457 [2024-11-20 05:11:28.273764] iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:13:31.457 [2024-11-20 05:11:28.274243] transport.c: 625:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:13:31.457 [2024-11-20 05:11:28.275407] iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:13:31.457 [2024-11-20 05:11:28.275428] iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:13:31.457 [2024-11-20 05:11:28.275912] transport.c: 625:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:13:31.457 [2024-11-20 05:11:28.277037] iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. 
You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:13:31.457 [2024-11-20 05:11:28.277056] iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:13:31.457 [2024-11-20 05:11:28.277541] transport.c: 625:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:13:31.717 05:11:28 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:31.717 05:11:28 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:13:31.717 05:11:28 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:13:31.717 192.168.100.9' 00:13:31.717 05:11:28 -- target/invalid.sh@67 -- # head -n 1 00:13:31.717 05:11:28 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:13:31.717 05:11:28 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:13:31.976 [2024-11-20 05:11:28.668703] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:31.976 05:11:28 -- target/invalid.sh@69 -- # out='request: 00:13:31.977 { 00:13:31.977 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:31.977 "listen_address": { 00:13:31.977 "trtype": "rdma", 00:13:31.977 "traddr": "192.168.100.8", 00:13:31.977 "trsvcid": "4421" 00:13:31.977 }, 00:13:31.977 "method": "nvmf_subsystem_remove_listener", 00:13:31.977 "req_id": 1 00:13:31.977 } 00:13:31.977 Got JSON-RPC error response 00:13:31.977 response: 00:13:31.977 { 00:13:31.977 "code": -32602, 00:13:31.977 "message": "Invalid parameters" 00:13:31.977 }' 00:13:31.977 05:11:28 -- target/invalid.sh@70 -- # [[ request: 00:13:31.977 { 00:13:31.977 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:31.977 "listen_address": { 00:13:31.977 "trtype": "rdma", 00:13:31.977 "traddr": "192.168.100.8", 00:13:31.977 
"trsvcid": "4421" 00:13:31.977 }, 00:13:31.977 "method": "nvmf_subsystem_remove_listener", 00:13:31.977 "req_id": 1 00:13:31.977 } 00:13:31.977 Got JSON-RPC error response 00:13:31.977 response: 00:13:31.977 { 00:13:31.977 "code": -32602, 00:13:31.977 "message": "Invalid parameters" 00:13:31.977 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:31.977 05:11:28 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10093 -i 0 00:13:32.236 [2024-11-20 05:11:28.873412] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10093: invalid cntlid range [0-65519] 00:13:32.236 05:11:28 -- target/invalid.sh@73 -- # out='request: 00:13:32.236 { 00:13:32.236 "nqn": "nqn.2016-06.io.spdk:cnode10093", 00:13:32.236 "min_cntlid": 0, 00:13:32.236 "method": "nvmf_create_subsystem", 00:13:32.236 "req_id": 1 00:13:32.236 } 00:13:32.236 Got JSON-RPC error response 00:13:32.236 response: 00:13:32.236 { 00:13:32.236 "code": -32602, 00:13:32.236 "message": "Invalid cntlid range [0-65519]" 00:13:32.236 }' 00:13:32.236 05:11:28 -- target/invalid.sh@74 -- # [[ request: 00:13:32.236 { 00:13:32.236 "nqn": "nqn.2016-06.io.spdk:cnode10093", 00:13:32.236 "min_cntlid": 0, 00:13:32.236 "method": "nvmf_create_subsystem", 00:13:32.236 "req_id": 1 00:13:32.236 } 00:13:32.236 Got JSON-RPC error response 00:13:32.236 response: 00:13:32.236 { 00:13:32.236 "code": -32602, 00:13:32.236 "message": "Invalid cntlid range [0-65519]" 00:13:32.236 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.236 05:11:28 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3687 -i 65520 00:13:32.236 [2024-11-20 05:11:29.062110] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3687: invalid cntlid range [65520-65519] 00:13:32.496 05:11:29 -- 
target/invalid.sh@75 -- # out='request: 00:13:32.496 { 00:13:32.496 "nqn": "nqn.2016-06.io.spdk:cnode3687", 00:13:32.496 "min_cntlid": 65520, 00:13:32.496 "method": "nvmf_create_subsystem", 00:13:32.496 "req_id": 1 00:13:32.496 } 00:13:32.496 Got JSON-RPC error response 00:13:32.496 response: 00:13:32.496 { 00:13:32.496 "code": -32602, 00:13:32.496 "message": "Invalid cntlid range [65520-65519]" 00:13:32.496 }' 00:13:32.496 05:11:29 -- target/invalid.sh@76 -- # [[ request: 00:13:32.496 { 00:13:32.496 "nqn": "nqn.2016-06.io.spdk:cnode3687", 00:13:32.496 "min_cntlid": 65520, 00:13:32.496 "method": "nvmf_create_subsystem", 00:13:32.496 "req_id": 1 00:13:32.496 } 00:13:32.496 Got JSON-RPC error response 00:13:32.496 response: 00:13:32.496 { 00:13:32.496 "code": -32602, 00:13:32.496 "message": "Invalid cntlid range [65520-65519]" 00:13:32.496 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.496 05:11:29 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30849 -I 0 00:13:32.496 [2024-11-20 05:11:29.238734] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30849: invalid cntlid range [1-0] 00:13:32.496 05:11:29 -- target/invalid.sh@77 -- # out='request: 00:13:32.496 { 00:13:32.496 "nqn": "nqn.2016-06.io.spdk:cnode30849", 00:13:32.496 "max_cntlid": 0, 00:13:32.496 "method": "nvmf_create_subsystem", 00:13:32.496 "req_id": 1 00:13:32.496 } 00:13:32.496 Got JSON-RPC error response 00:13:32.496 response: 00:13:32.496 { 00:13:32.496 "code": -32602, 00:13:32.496 "message": "Invalid cntlid range [1-0]" 00:13:32.496 }' 00:13:32.496 05:11:29 -- target/invalid.sh@78 -- # [[ request: 00:13:32.496 { 00:13:32.496 "nqn": "nqn.2016-06.io.spdk:cnode30849", 00:13:32.496 "max_cntlid": 0, 00:13:32.496 "method": "nvmf_create_subsystem", 00:13:32.496 "req_id": 1 00:13:32.496 } 00:13:32.496 Got JSON-RPC error response 00:13:32.496 response: 00:13:32.496 { 
00:13:32.496 "code": -32602, 00:13:32.496 "message": "Invalid cntlid range [1-0]" 00:13:32.496 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.496 05:11:29 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16386 -I 65520 00:13:32.756 [2024-11-20 05:11:29.411365] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16386: invalid cntlid range [1-65520] 00:13:32.756 05:11:29 -- target/invalid.sh@79 -- # out='request: 00:13:32.756 { 00:13:32.756 "nqn": "nqn.2016-06.io.spdk:cnode16386", 00:13:32.756 "max_cntlid": 65520, 00:13:32.756 "method": "nvmf_create_subsystem", 00:13:32.756 "req_id": 1 00:13:32.756 } 00:13:32.756 Got JSON-RPC error response 00:13:32.756 response: 00:13:32.756 { 00:13:32.756 "code": -32602, 00:13:32.756 "message": "Invalid cntlid range [1-65520]" 00:13:32.756 }' 00:13:32.756 05:11:29 -- target/invalid.sh@80 -- # [[ request: 00:13:32.756 { 00:13:32.756 "nqn": "nqn.2016-06.io.spdk:cnode16386", 00:13:32.756 "max_cntlid": 65520, 00:13:32.756 "method": "nvmf_create_subsystem", 00:13:32.756 "req_id": 1 00:13:32.756 } 00:13:32.756 Got JSON-RPC error response 00:13:32.756 response: 00:13:32.756 { 00:13:32.756 "code": -32602, 00:13:32.756 "message": "Invalid cntlid range [1-65520]" 00:13:32.756 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.756 05:11:29 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26709 -i 6 -I 5 00:13:33.016 [2024-11-20 05:11:29.583975] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26709: invalid cntlid range [6-5] 00:13:33.016 05:11:29 -- target/invalid.sh@83 -- # out='request: 00:13:33.016 { 00:13:33.016 "nqn": "nqn.2016-06.io.spdk:cnode26709", 00:13:33.016 "min_cntlid": 6, 00:13:33.016 "max_cntlid": 5, 00:13:33.016 "method": "nvmf_create_subsystem", 
00:13:33.016 "req_id": 1 00:13:33.016 } 00:13:33.016 Got JSON-RPC error response 00:13:33.016 response: 00:13:33.016 { 00:13:33.016 "code": -32602, 00:13:33.016 "message": "Invalid cntlid range [6-5]" 00:13:33.016 }' 00:13:33.016 05:11:29 -- target/invalid.sh@84 -- # [[ request: 00:13:33.016 { 00:13:33.016 "nqn": "nqn.2016-06.io.spdk:cnode26709", 00:13:33.016 "min_cntlid": 6, 00:13:33.016 "max_cntlid": 5, 00:13:33.016 "method": "nvmf_create_subsystem", 00:13:33.016 "req_id": 1 00:13:33.016 } 00:13:33.016 Got JSON-RPC error response 00:13:33.016 response: 00:13:33.016 { 00:13:33.016 "code": -32602, 00:13:33.016 "message": "Invalid cntlid range [6-5]" 00:13:33.016 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:33.016 05:11:29 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:33.016 05:11:29 -- target/invalid.sh@87 -- # out='request: 00:13:33.016 { 00:13:33.016 "name": "foobar", 00:13:33.016 "method": "nvmf_delete_target", 00:13:33.016 "req_id": 1 00:13:33.016 } 00:13:33.016 Got JSON-RPC error response 00:13:33.016 response: 00:13:33.016 { 00:13:33.016 "code": -32602, 00:13:33.016 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:33.016 }' 00:13:33.016 05:11:29 -- target/invalid.sh@88 -- # [[ request: 00:13:33.016 { 00:13:33.016 "name": "foobar", 00:13:33.016 "method": "nvmf_delete_target", 00:13:33.016 "req_id": 1 00:13:33.016 } 00:13:33.016 Got JSON-RPC error response 00:13:33.016 response: 00:13:33.016 { 00:13:33.016 "code": -32602, 00:13:33.016 "message": "The specified target doesn't exist, cannot delete it." 
00:13:33.016 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:33.016 05:11:29 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:33.016 05:11:29 -- target/invalid.sh@91 -- # nvmftestfini 00:13:33.016 05:11:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:33.016 05:11:29 -- nvmf/common.sh@116 -- # sync 00:13:33.016 05:11:29 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:13:33.016 05:11:29 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:13:33.016 05:11:29 -- nvmf/common.sh@119 -- # set +e 00:13:33.016 05:11:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:33.016 05:11:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:13:33.016 rmmod nvme_rdma 00:13:33.016 rmmod nvme_fabrics 00:13:33.016 05:11:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:33.016 05:11:29 -- nvmf/common.sh@123 -- # set -e 00:13:33.016 05:11:29 -- nvmf/common.sh@124 -- # return 0 00:13:33.016 05:11:29 -- nvmf/common.sh@477 -- # '[' -n 212026 ']' 00:13:33.016 05:11:29 -- nvmf/common.sh@478 -- # killprocess 212026 00:13:33.016 05:11:29 -- common/autotest_common.sh@936 -- # '[' -z 212026 ']' 00:13:33.016 05:11:29 -- common/autotest_common.sh@940 -- # kill -0 212026 00:13:33.017 05:11:29 -- common/autotest_common.sh@941 -- # uname 00:13:33.017 05:11:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:33.017 05:11:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 212026 00:13:33.017 05:11:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:33.017 05:11:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:33.017 05:11:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 212026' 00:13:33.017 killing process with pid 212026 00:13:33.017 05:11:29 -- common/autotest_common.sh@955 -- # kill 212026 00:13:33.017 05:11:29 -- common/autotest_common.sh@960 -- # wait 212026 00:13:33.277 05:11:30 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:33.277 05:11:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:13:33.277 00:13:33.277 real 0m9.687s 00:13:33.277 user 0m19.965s 00:13:33.277 sys 0m4.827s 00:13:33.277 05:11:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:33.277 05:11:30 -- common/autotest_common.sh@10 -- # set +x 00:13:33.277 ************************************ 00:13:33.277 END TEST nvmf_invalid 00:13:33.277 ************************************ 00:13:33.277 05:11:30 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:13:33.277 05:11:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:33.277 05:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:33.277 05:11:30 -- common/autotest_common.sh@10 -- # set +x 00:13:33.277 ************************************ 00:13:33.277 START TEST nvmf_abort 00:13:33.277 ************************************ 00:13:33.277 05:11:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:13:33.537 * Looking for test storage... 
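The long xtrace loop earlier in this log (target/invalid.sh@24-25) builds the invalid model-number string one character at a time: print a code as hex with `printf %x`, decode it with `echo -e '\xNN'`, and append it with `string+=`. A minimal standalone sketch of that technique, assuming bash; the function name `gen_random_string` and the 33..126 code range are ours for illustration, not taken from invalid.sh:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the character-by-character string builder seen in
# the xtrace above. Each iteration picks a printable ASCII code, converts it
# to hex, decodes it back to a character, and appends it to the result.
gen_random_string() {
    local length=$1 string='' ll code hex
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))     # printable ASCII 33..126 (no space)
        hex=$(printf %x "$code")         # e.g. 3e for '>'
        string+=$(echo -e "\x$hex")      # decode the hex escape and append
    done
    echo "$string"
}

s=$(gen_random_string 41)
echo "${#s}"
```

The real script then feeds such a string to `rpc.py nvmf_create_subsystem` and pattern-matches the JSON-RPC error text (e.g. `== *"Invalid MN"*`), as the captured `out=` blocks in this log show.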
00:13:33.537 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:13:33.537 05:11:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:33.537 05:11:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:33.537 05:11:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:33.537 05:11:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:33.537 05:11:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:33.537 05:11:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:33.537 05:11:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:33.537 05:11:30 -- scripts/common.sh@335 -- # IFS=.-: 00:13:33.537 05:11:30 -- scripts/common.sh@335 -- # read -ra ver1 00:13:33.537 05:11:30 -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.537 05:11:30 -- scripts/common.sh@336 -- # read -ra ver2 00:13:33.537 05:11:30 -- scripts/common.sh@337 -- # local 'op=<' 00:13:33.537 05:11:30 -- scripts/common.sh@339 -- # ver1_l=2 00:13:33.537 05:11:30 -- scripts/common.sh@340 -- # ver2_l=1 00:13:33.537 05:11:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:33.537 05:11:30 -- scripts/common.sh@343 -- # case "$op" in 00:13:33.537 05:11:30 -- scripts/common.sh@344 -- # : 1 00:13:33.537 05:11:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:33.537 05:11:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.537 05:11:30 -- scripts/common.sh@364 -- # decimal 1 00:13:33.537 05:11:30 -- scripts/common.sh@352 -- # local d=1 00:13:33.537 05:11:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.537 05:11:30 -- scripts/common.sh@354 -- # echo 1 00:13:33.537 05:11:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:33.537 05:11:30 -- scripts/common.sh@365 -- # decimal 2 00:13:33.537 05:11:30 -- scripts/common.sh@352 -- # local d=2 00:13:33.537 05:11:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.537 05:11:30 -- scripts/common.sh@354 -- # echo 2 00:13:33.537 05:11:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:33.537 05:11:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:33.537 05:11:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:33.537 05:11:30 -- scripts/common.sh@367 -- # return 0 00:13:33.537 05:11:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.537 05:11:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:33.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.537 --rc genhtml_branch_coverage=1 00:13:33.537 --rc genhtml_function_coverage=1 00:13:33.537 --rc genhtml_legend=1 00:13:33.537 --rc geninfo_all_blocks=1 00:13:33.537 --rc geninfo_unexecuted_blocks=1 00:13:33.537 00:13:33.537 ' 00:13:33.537 05:11:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:33.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.537 --rc genhtml_branch_coverage=1 00:13:33.537 --rc genhtml_function_coverage=1 00:13:33.537 --rc genhtml_legend=1 00:13:33.537 --rc geninfo_all_blocks=1 00:13:33.537 --rc geninfo_unexecuted_blocks=1 00:13:33.537 00:13:33.537 ' 00:13:33.537 05:11:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:33.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.537 --rc genhtml_branch_coverage=1 00:13:33.537 --rc 
genhtml_function_coverage=1 00:13:33.537 --rc genhtml_legend=1 00:13:33.537 --rc geninfo_all_blocks=1 00:13:33.537 --rc geninfo_unexecuted_blocks=1 00:13:33.537 00:13:33.537 ' 00:13:33.537 05:11:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:33.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.537 --rc genhtml_branch_coverage=1 00:13:33.537 --rc genhtml_function_coverage=1 00:13:33.537 --rc genhtml_legend=1 00:13:33.537 --rc geninfo_all_blocks=1 00:13:33.537 --rc geninfo_unexecuted_blocks=1 00:13:33.537 00:13:33.537 ' 00:13:33.537 05:11:30 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.537 05:11:30 -- nvmf/common.sh@7 -- # uname -s 00:13:33.537 05:11:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.537 05:11:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.537 05:11:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.537 05:11:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.537 05:11:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.537 05:11:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.537 05:11:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.537 05:11:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.537 05:11:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.537 05:11:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.537 05:11:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:33.537 05:11:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:33.537 05:11:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.537 05:11:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.537 05:11:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:33.537 05:11:30 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:13:33.537 05:11:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.537 05:11:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.537 05:11:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.537 05:11:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.537 05:11:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.538 05:11:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.538 05:11:30 -- paths/export.sh@5 -- # export PATH 00:13:33.538 05:11:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.538 05:11:30 -- nvmf/common.sh@46 -- # : 0 00:13:33.538 05:11:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:33.538 05:11:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:33.538 05:11:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:33.538 05:11:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.538 05:11:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.538 05:11:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:33.538 05:11:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:33.538 05:11:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:33.538 05:11:30 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.538 05:11:30 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:33.538 05:11:30 -- target/abort.sh@14 -- # 
nvmftestinit 00:13:33.538 05:11:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:13:33.538 05:11:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.538 05:11:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:33.538 05:11:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:33.538 05:11:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:33.538 05:11:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.538 05:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.538 05:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.538 05:11:30 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:33.538 05:11:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:33.538 05:11:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:33.538 05:11:30 -- common/autotest_common.sh@10 -- # set +x 00:13:38.823 05:11:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:38.823 05:11:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:38.823 05:11:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:38.823 05:11:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:38.823 05:11:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:38.823 05:11:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:38.823 05:11:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:38.823 05:11:35 -- nvmf/common.sh@294 -- # net_devs=() 00:13:38.823 05:11:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:38.823 05:11:35 -- nvmf/common.sh@295 -- # e810=() 00:13:38.823 05:11:35 -- nvmf/common.sh@295 -- # local -ga e810 00:13:38.823 05:11:35 -- nvmf/common.sh@296 -- # x722=() 00:13:38.823 05:11:35 -- nvmf/common.sh@296 -- # local -ga x722 00:13:38.823 05:11:35 -- nvmf/common.sh@297 -- # mlx=() 00:13:38.823 05:11:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:38.823 05:11:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.823 05:11:35 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.823 05:11:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.823 05:11:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.823 05:11:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.823 05:11:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.823 05:11:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.823 05:11:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.823 05:11:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.824 05:11:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.824 05:11:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.824 05:11:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:38.824 05:11:35 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:13:38.824 05:11:35 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:13:38.824 05:11:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:38.824 05:11:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:38.824 05:11:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:38.824 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:38.824 05:11:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.824 05:11:35 
-- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:13:38.824 05:11:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:38.824 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:38.824 05:11:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:13:38.824 05:11:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:38.824 05:11:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:13:38.824 05:11:35 -- nvmf/common.sh@376 -- # modinfo irdma 00:13:38.824 05:11:35 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:13:38.824 05:11:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.824 05:11:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:38.824 05:11:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.824 05:11:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:38.824 Found net devices under 0000:af:00.0: cvl_0_0 00:13:38.824 05:11:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.824 05:11:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.824 05:11:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:38.824 05:11:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.824 05:11:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:38.824 Found net devices under 0000:af:00.1: cvl_0_1 00:13:38.824 05:11:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.824 05:11:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:38.824 05:11:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:38.824 05:11:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@408 -- # rdma_device_init 00:13:38.824 05:11:35 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:13:38.824 05:11:35 -- nvmf/common.sh@57 -- # uname 00:13:38.824 05:11:35 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:13:38.824 05:11:35 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:13:38.824 05:11:35 -- nvmf/common.sh@62 -- # modprobe ib_core 00:13:38.824 05:11:35 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:13:38.824 05:11:35 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:13:38.824 05:11:35 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:13:38.824 05:11:35 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:13:38.824 05:11:35 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:13:38.824 05:11:35 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:13:38.824 05:11:35 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:38.824 05:11:35 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:13:38.824 05:11:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:38.824 05:11:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:13:38.824 05:11:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:13:38.824 05:11:35 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:38.824 05:11:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:13:38.824 05:11:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:13:38.824 05:11:35 -- nvmf/common.sh@104 -- # continue 2 00:13:38.824 05:11:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:13:38.824 05:11:35 -- nvmf/common.sh@104 -- # continue 2 00:13:38.824 05:11:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:13:38.824 05:11:35 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:13:38.824 05:11:35 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:38.824 05:11:35 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:13:38.824 05:11:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:13:38.824 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:38.824 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:13:38.824 altname enp175s0f0np0 00:13:38.824 altname ens801f0np0 00:13:38.824 inet 192.168.100.8/24 scope global cvl_0_0 00:13:38.824 valid_lft forever preferred_lft 
forever 00:13:38.824 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:13:38.824 valid_lft forever preferred_lft forever 00:13:38.824 05:11:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:13:38.824 05:11:35 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:13:38.824 05:11:35 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:38.824 05:11:35 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:13:38.824 05:11:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:13:38.824 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:38.824 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:13:38.824 altname enp175s0f1np1 00:13:38.824 altname ens801f1np1 00:13:38.824 inet 192.168.100.9/24 scope global cvl_0_1 00:13:38.824 valid_lft forever preferred_lft forever 00:13:38.824 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:13:38.824 valid_lft forever preferred_lft forever 00:13:38.824 05:11:35 -- nvmf/common.sh@410 -- # return 0 00:13:38.824 05:11:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:38.824 05:11:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:38.824 05:11:35 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:13:38.824 05:11:35 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:13:38.824 05:11:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:38.824 05:11:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:13:38.824 05:11:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:13:38.824 05:11:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:38.824 05:11:35 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:13:38.824 05:11:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:13:38.824 05:11:35 -- nvmf/common.sh@104 -- # continue 2 00:13:38.824 05:11:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.824 05:11:35 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:38.824 05:11:35 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:13:38.824 05:11:35 -- nvmf/common.sh@104 -- # continue 2 00:13:38.824 05:11:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:13:38.824 05:11:35 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:13:38.824 05:11:35 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:38.824 05:11:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:13:38.824 05:11:35 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:13:38.824 05:11:35 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:13:38.824 05:11:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:13:38.825 05:11:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:38.825 05:11:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:38.825 05:11:35 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:13:38.825 192.168.100.9' 00:13:38.825 05:11:35 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:13:38.825 
192.168.100.9' 00:13:38.825 05:11:35 -- nvmf/common.sh@445 -- # head -n 1 00:13:38.825 05:11:35 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:38.825 05:11:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:38.825 192.168.100.9' 00:13:38.825 05:11:35 -- nvmf/common.sh@446 -- # tail -n +2 00:13:38.825 05:11:35 -- nvmf/common.sh@446 -- # head -n 1 00:13:38.825 05:11:35 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:38.825 05:11:35 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:13:38.825 05:11:35 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:38.825 05:11:35 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:13:38.825 05:11:35 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:13:38.825 05:11:35 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:13:38.825 05:11:35 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:38.825 05:11:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:38.825 05:11:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:38.825 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:13:38.825 05:11:35 -- nvmf/common.sh@469 -- # nvmfpid=215855 00:13:38.825 05:11:35 -- nvmf/common.sh@470 -- # waitforlisten 215855 00:13:38.825 05:11:35 -- common/autotest_common.sh@829 -- # '[' -z 215855 ']' 00:13:38.825 05:11:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.825 05:11:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.825 05:11:35 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:38.825 05:11:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:38.825 05:11:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.825 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:13:38.825 [2024-11-20 05:11:35.431793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:38.825 [2024-11-20 05:11:35.431836] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.825 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.825 [2024-11-20 05:11:35.487949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.825 [2024-11-20 05:11:35.563132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:38.825 [2024-11-20 05:11:35.563240] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.825 [2024-11-20 05:11:35.563248] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.825 [2024-11-20 05:11:35.563254] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:38.825 [2024-11-20 05:11:35.563357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.825 [2024-11-20 05:11:35.563445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.825 [2024-11-20 05:11:35.563445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.764 05:11:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.765 05:11:36 -- common/autotest_common.sh@862 -- # return 0 00:13:39.765 05:11:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:39.765 05:11:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.765 05:11:36 -- common/autotest_common.sh@10 -- # set +x 00:13:39.765 05:11:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.765 05:11:36 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:13:39.765 05:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.765 05:11:36 -- common/autotest_common.sh@10 -- # set +x 00:13:39.765 [2024-11-20 05:11:36.307921] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x103e8d0/0x103df10) succeed. 00:13:39.765 [2024-11-20 05:11:36.316575] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x103fbc0/0x103e490) succeed. 00:13:39.765 [2024-11-20 05:11:36.316598] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:13:39.765 05:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.765 05:11:36 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:39.765 05:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.765 05:11:36 -- common/autotest_common.sh@10 -- # set +x 00:13:39.765 Malloc0 00:13:39.765 05:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.765 05:11:36 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:39.765 05:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.765 05:11:36 -- common/autotest_common.sh@10 -- # set +x 00:13:39.765 Delay0 00:13:39.765 05:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.765 05:11:36 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:39.765 05:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.765 05:11:36 -- common/autotest_common.sh@10 -- # set +x 00:13:39.765 05:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.765 05:11:36 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:39.765 05:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.765 05:11:36 -- common/autotest_common.sh@10 -- # set +x 00:13:39.765 05:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.765 05:11:36 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:39.765 05:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.765 05:11:36 -- common/autotest_common.sh@10 -- # set +x 00:13:39.765 [2024-11-20 05:11:36.391245] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:39.765 05:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.765 05:11:36 -- 
target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:39.765 05:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.765 05:11:36 -- common/autotest_common.sh@10 -- # set +x 00:13:39.765 05:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.765 05:11:36 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:39.765 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.765 [2024-11-20 05:11:36.475908] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:42.305 Initializing NVMe Controllers 00:13:42.305 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:13:42.305 controller IO queue size 128 less than required 00:13:42.305 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:42.305 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:42.305 Initialization complete. Launching workers. 
00:13:42.305 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51199 00:13:42.305 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51260, failed to submit 62 00:13:42.305 success 51199, unsuccess 61, failed 0 00:13:42.305 05:11:38 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:42.305 05:11:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.305 05:11:38 -- common/autotest_common.sh@10 -- # set +x 00:13:42.305 05:11:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.305 05:11:38 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:42.305 05:11:38 -- target/abort.sh@38 -- # nvmftestfini 00:13:42.305 05:11:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:42.305 05:11:38 -- nvmf/common.sh@116 -- # sync 00:13:42.305 05:11:38 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:13:42.305 05:11:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:13:42.305 05:11:38 -- nvmf/common.sh@119 -- # set +e 00:13:42.305 05:11:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:42.305 05:11:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:13:42.305 rmmod nvme_rdma 00:13:42.305 rmmod nvme_fabrics 00:13:42.305 05:11:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:42.305 05:11:38 -- nvmf/common.sh@123 -- # set -e 00:13:42.305 05:11:38 -- nvmf/common.sh@124 -- # return 0 00:13:42.305 05:11:38 -- nvmf/common.sh@477 -- # '[' -n 215855 ']' 00:13:42.305 05:11:38 -- nvmf/common.sh@478 -- # killprocess 215855 00:13:42.305 05:11:38 -- common/autotest_common.sh@936 -- # '[' -z 215855 ']' 00:13:42.305 05:11:38 -- common/autotest_common.sh@940 -- # kill -0 215855 00:13:42.305 05:11:38 -- common/autotest_common.sh@941 -- # uname 00:13:42.305 05:11:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:42.305 05:11:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 215855 00:13:42.305 
05:11:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:42.305 05:11:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:42.305 05:11:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 215855' 00:13:42.305 killing process with pid 215855 00:13:42.305 05:11:38 -- common/autotest_common.sh@955 -- # kill 215855 00:13:42.305 05:11:38 -- common/autotest_common.sh@960 -- # wait 215855 00:13:42.305 05:11:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:42.305 05:11:38 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:13:42.305 00:13:42.305 real 0m8.859s 00:13:42.305 user 0m13.849s 00:13:42.305 sys 0m4.229s 00:13:42.305 05:11:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:42.305 05:11:38 -- common/autotest_common.sh@10 -- # set +x 00:13:42.305 ************************************ 00:13:42.305 END TEST nvmf_abort 00:13:42.305 ************************************ 00:13:42.305 05:11:38 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:13:42.305 05:11:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:42.305 05:11:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:42.305 05:11:38 -- common/autotest_common.sh@10 -- # set +x 00:13:42.305 ************************************ 00:13:42.305 START TEST nvmf_ns_hotplug_stress 00:13:42.305 ************************************ 00:13:42.305 05:11:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:13:42.305 * Looking for test storage... 
00:13:42.305 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:13:42.305 05:11:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:42.305 05:11:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:42.305 05:11:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:42.305 05:11:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:42.305 05:11:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:42.305 05:11:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:42.305 05:11:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:42.305 05:11:39 -- scripts/common.sh@335 -- # IFS=.-: 00:13:42.305 05:11:39 -- scripts/common.sh@335 -- # read -ra ver1 00:13:42.305 05:11:39 -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.305 05:11:39 -- scripts/common.sh@336 -- # read -ra ver2 00:13:42.305 05:11:39 -- scripts/common.sh@337 -- # local 'op=<' 00:13:42.305 05:11:39 -- scripts/common.sh@339 -- # ver1_l=2 00:13:42.305 05:11:39 -- scripts/common.sh@340 -- # ver2_l=1 00:13:42.305 05:11:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:42.305 05:11:39 -- scripts/common.sh@343 -- # case "$op" in 00:13:42.305 05:11:39 -- scripts/common.sh@344 -- # : 1 00:13:42.305 05:11:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:42.305 05:11:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.305 05:11:39 -- scripts/common.sh@364 -- # decimal 1 00:13:42.305 05:11:39 -- scripts/common.sh@352 -- # local d=1 00:13:42.305 05:11:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.305 05:11:39 -- scripts/common.sh@354 -- # echo 1 00:13:42.305 05:11:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:42.305 05:11:39 -- scripts/common.sh@365 -- # decimal 2 00:13:42.305 05:11:39 -- scripts/common.sh@352 -- # local d=2 00:13:42.305 05:11:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.305 05:11:39 -- scripts/common.sh@354 -- # echo 2 00:13:42.305 05:11:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:42.305 05:11:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:42.305 05:11:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:42.305 05:11:39 -- scripts/common.sh@367 -- # return 0 00:13:42.305 05:11:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.305 05:11:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:42.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.305 --rc genhtml_branch_coverage=1 00:13:42.305 --rc genhtml_function_coverage=1 00:13:42.305 --rc genhtml_legend=1 00:13:42.305 --rc geninfo_all_blocks=1 00:13:42.305 --rc geninfo_unexecuted_blocks=1 00:13:42.305 00:13:42.305 ' 00:13:42.305 05:11:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:42.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.305 --rc genhtml_branch_coverage=1 00:13:42.305 --rc genhtml_function_coverage=1 00:13:42.305 --rc genhtml_legend=1 00:13:42.305 --rc geninfo_all_blocks=1 00:13:42.305 --rc geninfo_unexecuted_blocks=1 00:13:42.305 00:13:42.305 ' 00:13:42.305 05:11:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:42.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.305 --rc genhtml_branch_coverage=1 00:13:42.305 --rc 
genhtml_function_coverage=1 00:13:42.305 --rc genhtml_legend=1 00:13:42.305 --rc geninfo_all_blocks=1 00:13:42.305 --rc geninfo_unexecuted_blocks=1 00:13:42.305 00:13:42.305 ' 00:13:42.305 05:11:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:42.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.305 --rc genhtml_branch_coverage=1 00:13:42.305 --rc genhtml_function_coverage=1 00:13:42.305 --rc genhtml_legend=1 00:13:42.305 --rc geninfo_all_blocks=1 00:13:42.305 --rc geninfo_unexecuted_blocks=1 00:13:42.305 00:13:42.305 ' 00:13:42.305 05:11:39 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.305 05:11:39 -- nvmf/common.sh@7 -- # uname -s 00:13:42.305 05:11:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.305 05:11:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.305 05:11:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.305 05:11:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.305 05:11:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.305 05:11:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.305 05:11:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.305 05:11:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.305 05:11:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.305 05:11:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.574 05:11:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:42.574 05:11:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:42.574 05:11:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.574 05:11:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.574 05:11:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:42.574 05:11:39 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:13:42.574 05:11:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.574 05:11:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.574 05:11:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.574 05:11:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.574 05:11:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.575 05:11:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.575 05:11:39 -- paths/export.sh@5 -- # export PATH 00:13:42.575 05:11:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.575 05:11:39 -- nvmf/common.sh@46 -- # : 0 00:13:42.575 05:11:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:42.575 05:11:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:42.575 05:11:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:42.575 05:11:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.575 05:11:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.575 05:11:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:42.575 05:11:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:42.575 05:11:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:42.575 05:11:39 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:13:42.575 05:11:39 -- target/ns_hotplug_stress.sh@22 -- # 
nvmftestinit 00:13:42.575 05:11:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:13:42.575 05:11:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.575 05:11:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:42.575 05:11:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:42.575 05:11:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:42.575 05:11:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.575 05:11:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.575 05:11:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.575 05:11:39 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:42.575 05:11:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:42.575 05:11:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:42.575 05:11:39 -- common/autotest_common.sh@10 -- # set +x 00:13:47.860 05:11:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:47.860 05:11:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:47.860 05:11:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:47.860 05:11:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:47.860 05:11:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:47.860 05:11:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:47.860 05:11:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:47.860 05:11:43 -- nvmf/common.sh@294 -- # net_devs=() 00:13:47.860 05:11:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:47.860 05:11:43 -- nvmf/common.sh@295 -- # e810=() 00:13:47.860 05:11:43 -- nvmf/common.sh@295 -- # local -ga e810 00:13:47.860 05:11:43 -- nvmf/common.sh@296 -- # x722=() 00:13:47.860 05:11:43 -- nvmf/common.sh@296 -- # local -ga x722 00:13:47.860 05:11:43 -- nvmf/common.sh@297 -- # mlx=() 00:13:47.860 05:11:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:47.860 05:11:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.860 05:11:43 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.860 05:11:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:47.860 05:11:43 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:13:47.860 05:11:43 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:13:47.860 05:11:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:47.860 05:11:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:47.860 05:11:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:47.860 05:11:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:47.860 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:47.860 05:11:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.860 05:11:43 
-- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:13:47.860 05:11:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:47.860 05:11:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:47.860 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:47.860 05:11:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:13:47.860 05:11:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:47.860 05:11:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:13:47.860 05:11:43 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:13:47.861 05:11:43 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:13:47.861 05:11:43 -- nvmf/common.sh@376 -- # modinfo irdma 00:13:47.861 05:11:43 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:13:47.861 05:11:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:47.861 05:11:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.861 05:11:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:47.861 05:11:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.861 05:11:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:47.861 Found net devices under 0000:af:00.0: cvl_0_0 00:13:47.861 05:11:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.861 05:11:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:47.861 05:11:43 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.861 05:11:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:47.861 05:11:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.861 05:11:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:47.861 Found net devices under 0000:af:00.1: cvl_0_1 00:13:47.861 05:11:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.861 05:11:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:47.861 05:11:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:47.861 05:11:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:13:47.861 05:11:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:13:47.861 05:11:44 -- nvmf/common.sh@57 -- # uname 00:13:47.861 05:11:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:13:47.861 05:11:44 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:13:47.861 05:11:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:13:47.861 05:11:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:13:47.861 05:11:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:13:47.861 05:11:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:13:47.861 05:11:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:13:47.861 05:11:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:13:47.861 05:11:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:13:47.861 05:11:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:47.861 05:11:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:13:47.861 05:11:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:47.861 05:11:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:13:47.861 05:11:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:13:47.861 05:11:44 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:47.861 05:11:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:13:47.861 05:11:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:13:47.861 05:11:44 -- nvmf/common.sh@104 -- # continue 2 00:13:47.861 05:11:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:13:47.861 05:11:44 -- nvmf/common.sh@104 -- # continue 2 00:13:47.861 05:11:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:13:47.861 05:11:44 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:13:47.861 05:11:44 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:13:47.861 05:11:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:13:47.861 05:11:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:13:47.861 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:47.861 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:13:47.861 altname enp175s0f0np0 00:13:47.861 altname ens801f0np0 00:13:47.861 inet 192.168.100.8/24 scope global cvl_0_0 00:13:47.861 valid_lft forever preferred_lft 
forever 00:13:47.861 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:13:47.861 valid_lft forever preferred_lft forever 00:13:47.861 05:11:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:13:47.861 05:11:44 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:13:47.861 05:11:44 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:47.861 05:11:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:13:47.861 05:11:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:13:47.861 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:47.861 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:13:47.861 altname enp175s0f1np1 00:13:47.861 altname ens801f1np1 00:13:47.861 inet 192.168.100.9/24 scope global cvl_0_1 00:13:47.861 valid_lft forever preferred_lft forever 00:13:47.861 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:13:47.861 valid_lft forever preferred_lft forever 00:13:47.861 05:11:44 -- nvmf/common.sh@410 -- # return 0 00:13:47.861 05:11:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:47.861 05:11:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:47.861 05:11:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:13:47.861 05:11:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:13:47.861 05:11:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:47.861 05:11:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:13:47.861 05:11:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:13:47.861 05:11:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:47.861 05:11:44 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:13:47.861 05:11:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:13:47.861 05:11:44 -- nvmf/common.sh@104 -- # continue 2 00:13:47.861 05:11:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.861 05:11:44 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:47.861 05:11:44 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:13:47.861 05:11:44 -- nvmf/common.sh@104 -- # continue 2 00:13:47.861 05:11:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:13:47.861 05:11:44 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:13:47.861 05:11:44 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:47.861 05:11:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:13:47.861 05:11:44 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:13:47.861 05:11:44 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:13:47.861 05:11:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:13:47.861 05:11:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:13:47.861 192.168.100.9' 00:13:47.861 05:11:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:13:47.861 
192.168.100.9' 00:13:47.861 05:11:44 -- nvmf/common.sh@445 -- # head -n 1 00:13:47.861 05:11:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:47.861 05:11:44 -- nvmf/common.sh@446 -- # tail -n +2 00:13:47.861 05:11:44 -- nvmf/common.sh@446 -- # head -n 1 00:13:47.861 05:11:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:47.861 192.168.100.9' 00:13:47.861 05:11:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:47.861 05:11:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:13:47.861 05:11:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:47.861 05:11:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:13:47.861 05:11:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:13:47.861 05:11:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:13:47.861 05:11:44 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:47.861 05:11:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:47.861 05:11:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.861 05:11:44 -- common/autotest_common.sh@10 -- # set +x 00:13:47.861 05:11:44 -- nvmf/common.sh@469 -- # nvmfpid=219598 00:13:47.861 05:11:44 -- nvmf/common.sh@470 -- # waitforlisten 219598 00:13:47.861 05:11:44 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:47.861 05:11:44 -- common/autotest_common.sh@829 -- # '[' -z 219598 ']' 00:13:47.861 05:11:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.861 05:11:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.861 05:11:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:47.861 05:11:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.861 05:11:44 -- common/autotest_common.sh@10 -- # set +x 00:13:47.861 [2024-11-20 05:11:44.241114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:47.861 [2024-11-20 05:11:44.241160] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.861 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.862 [2024-11-20 05:11:44.291374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:47.862 [2024-11-20 05:11:44.364581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:47.862 [2024-11-20 05:11:44.364690] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.862 [2024-11-20 05:11:44.364698] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.862 [2024-11-20 05:11:44.364704] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:47.862 [2024-11-20 05:11:44.364744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.862 [2024-11-20 05:11:44.364831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.862 [2024-11-20 05:11:44.364832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.430 05:11:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.430 05:11:45 -- common/autotest_common.sh@862 -- # return 0 00:13:48.430 05:11:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:48.430 05:11:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:48.430 05:11:45 -- common/autotest_common.sh@10 -- # set +x 00:13:48.430 05:11:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.430 05:11:45 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:48.430 05:11:45 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:48.689 [2024-11-20 05:11:45.280700] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1deb8d0/0x1deaf10) succeed. 00:13:48.689 [2024-11-20 05:11:45.289333] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1decbc0/0x1deb490) succeed. 00:13:48.689 [2024-11-20 05:11:45.289356] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:13:48.689 05:11:45 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.949 05:11:45 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:48.949 [2024-11-20 05:11:45.678909] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:48.949 05:11:45 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:49.209 05:11:45 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:49.468 Malloc0 00:13:49.469 05:11:46 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:49.469 Delay0 00:13:49.469 05:11:46 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.728 05:11:46 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:49.988 NULL1 00:13:49.988 05:11:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:50.248 05:11:46 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=220034 00:13:50.248 05:11:46 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 
traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:50.248 05:11:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034 00:13:50.248 05:11:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.248 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.187 Read completed with error (sct=0, sc=11) 00:13:51.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.447 05:11:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.447 05:11:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:51.447 05:11:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:51.706 true 00:13:51.706 05:11:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034 00:13:51.707 05:11:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.644 05:11:49 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.644 05:11:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:52.644 05:11:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:52.911 true 00:13:52.911 05:11:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034 00:13:52.911 05:11:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.850 05:11:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.850 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.850 05:11:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:53.850 05:11:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:54.109 true 00:13:54.109 05:11:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034 00:13:54.109 05:11:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.047 05:11:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.047 05:11:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:55.047 05:11:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:55.307 true 00:13:55.307 05:11:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034 00:13:55.307 05:11:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.245 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11)
00:13:56.245 05:11:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:56.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.245 05:11:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:13:56.245 05:11:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:13:56.505 true
00:13:56.505 05:11:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:13:56.505 05:11:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:57.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:57.443 05:11:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:57.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:57.443 05:11:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:13:57.443 05:11:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:13:57.713 true
00:13:57.713 05:11:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:13:57.713 05:11:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:58.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:58.652 05:11:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:58.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:58.652 05:11:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:13:58.652 05:11:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:13:58.911 true
00:13:58.911 05:11:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:13:58.911 05:11:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:59.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:59.849 05:11:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:59.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:59.849 05:11:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:13:59.849 05:11:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:14:00.109 true
00:14:00.109 05:11:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:00.109 05:11:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:01.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:01.047 05:11:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:01.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:01.047 05:11:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:14:01.047 05:11:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:14:01.306 true
00:14:01.306 05:11:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:01.306 05:11:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:02.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:02.245 05:11:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:02.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:02.245 05:11:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:14:02.245 05:11:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:14:02.521 true
00:14:02.521 05:11:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:02.521 05:11:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:03.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:03.470 05:12:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:03.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:03.470 05:12:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:14:03.470 05:12:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:14:03.730 true
00:14:03.730 05:12:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:03.730 05:12:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:04.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:04.667 05:12:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:04.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:04.667 05:12:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:14:04.667 05:12:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:14:04.926 true
00:14:04.926 05:12:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:04.926 05:12:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:05.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:05.863 05:12:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:05.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:05.863 05:12:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:14:05.863 05:12:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:14:06.123 true
00:14:06.123 05:12:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:06.123 05:12:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:07.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:07.060 05:12:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:07.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:07.060 05:12:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:14:07.060 05:12:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:14:07.319 true
00:14:07.319 05:12:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:07.319 05:12:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:08.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:08.257 05:12:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:08.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:08.257 05:12:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:14:08.257 05:12:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:14:08.516 true
00:14:08.516 05:12:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:08.516 05:12:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:09.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:09.454 05:12:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:09.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:09.713 05:12:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:14:09.713 05:12:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:14:09.713 true
00:14:09.713 05:12:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:09.713 05:12:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:10.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:10.653 05:12:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:10.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:10.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:10.912 05:12:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:14:10.912 05:12:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:14:11.171 true
00:14:11.171 05:12:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:11.171 05:12:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:11.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:12.000 05:12:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:12.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:12.000 05:12:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:14:12.000 05:12:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:14:12.259 true
00:14:12.259 05:12:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:12.259 05:12:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:13.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:13.197 05:12:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:13.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:13.197 05:12:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:14:13.197 05:12:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:14:13.457 true
00:14:13.457 05:12:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:13.457 05:12:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:14.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:14.396 05:12:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:14.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:14.396 05:12:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:14:14.396 05:12:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:14:14.655 true
00:14:14.655 05:12:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:14.655 05:12:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:15.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:15.594 05:12:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:15.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:15.594 05:12:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:14:15.594 05:12:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:14:15.854 true
00:14:15.854 05:12:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:15.854 05:12:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:16.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:16.793 05:12:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:16.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:17.053 05:12:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:14:17.053 05:12:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:14:17.053 true
00:14:17.053 05:12:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:17.053 05:12:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:17.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:17.994 05:12:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:17.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:18.254 05:12:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:14:18.254 05:12:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:14:18.254 true
00:14:18.254 05:12:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:18.254 05:12:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:19.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:19.194 05:12:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:19.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:19.454 05:12:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:14:19.454 05:12:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:14:19.454 true
00:14:19.454 05:12:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:19.454 05:12:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:20.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:20.393 05:12:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:20.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:20.652 05:12:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:14:20.652 05:12:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:14:20.652 true
00:14:20.652 05:12:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:20.652 05:12:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:20.912 05:12:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:21.170 05:12:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:14:21.170 05:12:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:14:21.170 true
00:14:21.170 05:12:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:21.170 05:12:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:21.430 05:12:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:21.689 05:12:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:14:21.689 05:12:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:14:21.689 true
00:14:21.948 05:12:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:21.948 05:12:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:21.948 05:12:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:22.207 05:12:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:14:22.207 05:12:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:14:22.467 true
00:14:22.467 Initializing NVMe Controllers
00:14:22.467 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:14:22.467 Controller IO queue size 128, less than required.
00:14:22.467 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:22.467 Controller IO queue size 128, less than required.
00:14:22.467 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:22.467 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:22.467 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:22.467 Initialization complete. Launching workers.
00:14:22.467 ========================================================
00:14:22.467 Latency(us)
00:14:22.467 Device Information : IOPS MiB/s Average min max
00:14:22.467 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6141.99 3.00 18159.01 1355.87 1137027.75
00:14:22.467 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35287.76 17.23 3627.33 1776.66 287761.40
00:14:22.467 ========================================================
00:14:22.467 Total : 41429.75 20.23 5781.67 1355.87 1137027.75
00:14:22.467
00:14:22.467 05:12:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 220034
00:14:22.467 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (220034) - No such process
00:14:22.467 05:12:19 -- target/ns_hotplug_stress.sh@53 -- # wait 220034
00:14:22.467 05:12:19 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:22.467 05:12:19 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:22.726 05:12:19 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:22.726 05:12:19 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:22.726 05:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:22.726 05:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:22.726 05:12:19 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:22.985 null0
00:14:22.985 05:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:22.985 05:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:22.985 05:12:19 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:22.985 null1
00:14:23.244 05:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:23.244 05:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:23.244 05:12:19 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:23.244 null2
00:14:23.244 05:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:23.244 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:23.244 05:12:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:14:23.502 null3
00:14:23.502 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:23.503 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:23.503 05:12:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:14:23.762 null4
00:14:23.762 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:23.762 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:23.762 05:12:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:14:23.762 null5
00:14:23.762 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:23.762 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:23.762 05:12:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:14:24.022 null6
00:14:24.022 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:24.022 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:24.022 05:12:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:14:24.282 null7
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:24.282 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@66 -- # wait 226208 226210 226211 226213 226215 226217 226219 226221
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:24.283 05:12:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.543 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.802 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.802 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.802 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.802 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.802 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:24.802 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.802 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.802 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.062 05:12:21 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.062 05:12:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.322 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.322 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.322 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.322 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.322 05:12:21 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.322 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.322 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.322 05:12:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.322 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.582 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.582 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.582 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.582 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.582 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.582 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.582 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.582 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.842 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.842 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.842 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.843 05:12:22 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.843 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.102 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.102 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.102 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:14:26.102 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.102 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.102 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.103 05:12:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.363 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.363 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.363 05:12:23 -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.363 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.363 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.363 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.363 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.363 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.623 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.883 05:12:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.143 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.143 05:12:23 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.143 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.143 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.143 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.143 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.143 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.143 05:12:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.402 05:12:24 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.402 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:27.661 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.662 05:12:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.921 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.921 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.921 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.921 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.922 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.922 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.922 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.922 05:12:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:28.181 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.181 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.181 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.181 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.181 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.181 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.181 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:14:28.181 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:28.182 05:12:24 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:28.182 05:12:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:28.182 05:12:24 -- nvmf/common.sh@116 -- # sync 00:14:28.182 05:12:24 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:28.182 05:12:24 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:28.182 05:12:24 -- nvmf/common.sh@119 -- # set +e 00:14:28.182 05:12:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:28.182 05:12:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:28.182 rmmod nvme_rdma 00:14:28.182 rmmod nvme_fabrics 00:14:28.182 05:12:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:28.182 05:12:24 -- nvmf/common.sh@123 -- # set -e 00:14:28.182 05:12:24 -- nvmf/common.sh@124 -- # return 0 00:14:28.182 05:12:24 -- nvmf/common.sh@477 -- # '[' -n 219598 ']' 00:14:28.182 05:12:24 -- nvmf/common.sh@478 -- # killprocess 219598 00:14:28.182 05:12:24 -- common/autotest_common.sh@936 -- # '[' -z 219598 ']' 00:14:28.182 05:12:24 -- common/autotest_common.sh@940 -- # kill -0 219598 00:14:28.182 05:12:24 -- common/autotest_common.sh@941 -- # uname 00:14:28.182 05:12:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:28.182 05:12:24 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 219598 00:14:28.182 05:12:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:28.182 05:12:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:28.182 05:12:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 219598' 00:14:28.182 killing process with pid 219598 00:14:28.182 05:12:24 -- common/autotest_common.sh@955 -- # kill 219598 00:14:28.182 05:12:24 -- common/autotest_common.sh@960 -- # wait 219598 00:14:28.442 05:12:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:28.442 05:12:25 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:28.442 00:14:28.442 real 0m46.231s 00:14:28.442 user 3m18.072s 00:14:28.442 sys 0m10.520s 00:14:28.442 05:12:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:28.442 05:12:25 -- common/autotest_common.sh@10 -- # set +x 00:14:28.442 ************************************ 00:14:28.442 END TEST nvmf_ns_hotplug_stress 00:14:28.442 ************************************ 00:14:28.442 05:12:25 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:14:28.442 05:12:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:28.442 05:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.442 05:12:25 -- common/autotest_common.sh@10 -- # set +x 00:14:28.442 ************************************ 00:14:28.442 START TEST nvmf_connect_stress 00:14:28.442 ************************************ 00:14:28.442 05:12:25 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:14:28.703 * Looking for test storage... 
00:14:28.703 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:28.703 05:12:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:28.703 05:12:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:28.703 05:12:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:28.703 05:12:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:28.703 05:12:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:28.703 05:12:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:28.703 05:12:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:28.703 05:12:25 -- scripts/common.sh@335 -- # IFS=.-: 00:14:28.703 05:12:25 -- scripts/common.sh@335 -- # read -ra ver1 00:14:28.703 05:12:25 -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.703 05:12:25 -- scripts/common.sh@336 -- # read -ra ver2 00:14:28.703 05:12:25 -- scripts/common.sh@337 -- # local 'op=<' 00:14:28.703 05:12:25 -- scripts/common.sh@339 -- # ver1_l=2 00:14:28.703 05:12:25 -- scripts/common.sh@340 -- # ver2_l=1 00:14:28.703 05:12:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:28.703 05:12:25 -- scripts/common.sh@343 -- # case "$op" in 00:14:28.703 05:12:25 -- scripts/common.sh@344 -- # : 1 00:14:28.703 05:12:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:28.703 05:12:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.703 05:12:25 -- scripts/common.sh@364 -- # decimal 1 00:14:28.703 05:12:25 -- scripts/common.sh@352 -- # local d=1 00:14:28.703 05:12:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.703 05:12:25 -- scripts/common.sh@354 -- # echo 1 00:14:28.703 05:12:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:28.703 05:12:25 -- scripts/common.sh@365 -- # decimal 2 00:14:28.703 05:12:25 -- scripts/common.sh@352 -- # local d=2 00:14:28.703 05:12:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.703 05:12:25 -- scripts/common.sh@354 -- # echo 2 00:14:28.703 05:12:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:28.703 05:12:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:28.703 05:12:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:28.703 05:12:25 -- scripts/common.sh@367 -- # return 0 00:14:28.703 05:12:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.703 05:12:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.703 --rc genhtml_branch_coverage=1 00:14:28.703 --rc genhtml_function_coverage=1 00:14:28.703 --rc genhtml_legend=1 00:14:28.703 --rc geninfo_all_blocks=1 00:14:28.703 --rc geninfo_unexecuted_blocks=1 00:14:28.703 00:14:28.703 ' 00:14:28.703 05:12:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.703 --rc genhtml_branch_coverage=1 00:14:28.703 --rc genhtml_function_coverage=1 00:14:28.703 --rc genhtml_legend=1 00:14:28.703 --rc geninfo_all_blocks=1 00:14:28.703 --rc geninfo_unexecuted_blocks=1 00:14:28.703 00:14:28.703 ' 00:14:28.703 05:12:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.703 --rc genhtml_branch_coverage=1 00:14:28.703 --rc 
genhtml_function_coverage=1 00:14:28.703 --rc genhtml_legend=1 00:14:28.703 --rc geninfo_all_blocks=1 00:14:28.703 --rc geninfo_unexecuted_blocks=1 00:14:28.703 00:14:28.703 ' 00:14:28.703 05:12:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.703 --rc genhtml_branch_coverage=1 00:14:28.703 --rc genhtml_function_coverage=1 00:14:28.703 --rc genhtml_legend=1 00:14:28.703 --rc geninfo_all_blocks=1 00:14:28.703 --rc geninfo_unexecuted_blocks=1 00:14:28.703 00:14:28.703 ' 00:14:28.703 05:12:25 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.703 05:12:25 -- nvmf/common.sh@7 -- # uname -s 00:14:28.703 05:12:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.703 05:12:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.703 05:12:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.703 05:12:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.703 05:12:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.703 05:12:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.703 05:12:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.703 05:12:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.703 05:12:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.703 05:12:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.703 05:12:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:28.703 05:12:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:28.703 05:12:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.703 05:12:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.703 05:12:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:28.703 05:12:25 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:14:28.703 05:12:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.703 05:12:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.703 05:12:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.703 05:12:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.703 05:12:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.703 05:12:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.703 05:12:25 -- paths/export.sh@5 -- # export PATH 00:14:28.703 05:12:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.703 05:12:25 -- nvmf/common.sh@46 -- # : 0 00:14:28.703 05:12:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.703 05:12:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.703 05:12:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.703 05:12:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.703 05:12:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.703 05:12:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:28.703 05:12:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.703 05:12:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.703 05:12:25 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:28.703 05:12:25 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:28.703 05:12:25 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:14:28.703 05:12:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:28.703 05:12:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:28.703 05:12:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:28.703 05:12:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.703 05:12:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.703 05:12:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.703 05:12:25 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:28.703 05:12:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:28.703 05:12:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:28.703 05:12:25 -- common/autotest_common.sh@10 -- # set +x 00:14:33.983 05:12:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:33.983 05:12:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:33.983 05:12:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:33.983 05:12:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:33.983 05:12:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:33.983 05:12:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:33.983 05:12:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:33.983 05:12:30 -- nvmf/common.sh@294 -- # net_devs=() 00:14:33.983 05:12:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:33.983 05:12:30 -- nvmf/common.sh@295 -- # e810=() 00:14:33.983 05:12:30 -- nvmf/common.sh@295 -- # local -ga e810 00:14:33.983 05:12:30 -- nvmf/common.sh@296 -- # x722=() 00:14:33.983 05:12:30 -- nvmf/common.sh@296 -- # local -ga x722 00:14:33.983 05:12:30 -- nvmf/common.sh@297 -- # mlx=() 00:14:33.983 05:12:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:33.983 05:12:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.983 05:12:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:33.983 05:12:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:33.983 05:12:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:33.983 05:12:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:33.983 05:12:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:33.983 05:12:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:33.983 05:12:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:33.983 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:33.983 05:12:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme 
connect -i 15' 00:14:33.983 05:12:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:33.983 05:12:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:33.983 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:33.983 05:12:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:33.983 05:12:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:33.983 05:12:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:14:33.983 05:12:30 -- nvmf/common.sh@376 -- # modinfo irdma 00:14:33.983 05:12:30 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:14:33.983 05:12:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:33.983 05:12:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.983 05:12:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:33.983 05:12:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.983 05:12:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:33.983 Found net devices under 0000:af:00.0: cvl_0_0 00:14:33.983 05:12:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.983 05:12:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:33.983 05:12:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.983 05:12:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:33.983 
05:12:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.983 05:12:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:33.983 Found net devices under 0000:af:00.1: cvl_0_1 00:14:33.983 05:12:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.983 05:12:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:33.983 05:12:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:33.983 05:12:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:33.983 05:12:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:33.983 05:12:30 -- nvmf/common.sh@57 -- # uname 00:14:33.983 05:12:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:33.983 05:12:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:33.983 05:12:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:33.983 05:12:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:33.983 05:12:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:33.983 05:12:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:33.983 05:12:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:33.983 05:12:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:33.983 05:12:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:33.983 05:12:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:33.983 05:12:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:33.983 05:12:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:33.983 05:12:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:33.983 05:12:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:33.983 05:12:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:33.983 05:12:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:33.983 
05:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:33.983 05:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.983 05:12:30 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:14:33.983 05:12:30 -- nvmf/common.sh@104 -- # continue 2 00:14:33.983 05:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:33.983 05:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.983 05:12:30 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.983 05:12:30 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:14:33.983 05:12:30 -- nvmf/common.sh@104 -- # continue 2 00:14:33.983 05:12:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:33.983 05:12:30 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:14:33.983 05:12:30 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:14:33.983 05:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:14:33.983 05:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:33.983 05:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:33.983 05:12:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:33.983 05:12:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:14:33.983 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:33.983 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:33.983 altname enp175s0f0np0 00:14:33.983 altname ens801f0np0 00:14:33.983 inet 192.168.100.8/24 scope global cvl_0_0 00:14:33.983 valid_lft forever preferred_lft forever 00:14:33.983 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:33.983 valid_lft forever preferred_lft forever 00:14:33.983 05:12:30 
-- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:33.983 05:12:30 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:14:33.983 05:12:30 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:14:33.983 05:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:14:33.983 05:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:33.983 05:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:33.983 05:12:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:33.983 05:12:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:33.983 05:12:30 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:14:33.983 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:33.983 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:33.983 altname enp175s0f1np1 00:14:33.983 altname ens801f1np1 00:14:33.983 inet 192.168.100.9/24 scope global cvl_0_1 00:14:33.983 valid_lft forever preferred_lft forever 00:14:33.984 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:33.984 valid_lft forever preferred_lft forever 00:14:33.984 05:12:30 -- nvmf/common.sh@410 -- # return 0 00:14:33.984 05:12:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:33.984 05:12:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:33.984 05:12:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:33.984 05:12:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:33.984 05:12:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:33.984 05:12:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:33.984 05:12:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:33.984 05:12:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:33.984 05:12:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:33.984 05:12:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:33.984 05:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:33.984 05:12:30 -- 
nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.984 05:12:30 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:33.984 05:12:30 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:14:33.984 05:12:30 -- nvmf/common.sh@104 -- # continue 2 00:14:33.984 05:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:33.984 05:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.984 05:12:30 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:33.984 05:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.984 05:12:30 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:33.984 05:12:30 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:14:33.984 05:12:30 -- nvmf/common.sh@104 -- # continue 2 00:14:33.984 05:12:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:33.984 05:12:30 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:14:33.984 05:12:30 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:14:33.984 05:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:14:33.984 05:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:33.984 05:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:33.984 05:12:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:33.984 05:12:30 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:14:33.984 05:12:30 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:14:33.984 05:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:14:33.984 05:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:33.984 05:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:33.984 05:12:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:33.984 192.168.100.9' 00:14:33.984 05:12:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:33.984 192.168.100.9' 00:14:33.984 05:12:30 -- nvmf/common.sh@445 -- # head -n 1 00:14:33.984 05:12:30 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:33.984 05:12:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:33.984 192.168.100.9' 00:14:33.984 05:12:30 -- nvmf/common.sh@446 -- # tail -n +2 00:14:33.984 05:12:30 -- nvmf/common.sh@446 -- # head -n 1 00:14:33.984 05:12:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:33.984 05:12:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:33.984 05:12:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:33.984 05:12:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:33.984 05:12:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:33.984 05:12:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:33.984 05:12:30 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:33.984 05:12:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:33.984 05:12:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.984 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:14:33.984 05:12:30 -- nvmf/common.sh@469 -- # nvmfpid=230116 00:14:33.984 05:12:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:33.984 05:12:30 -- nvmf/common.sh@470 -- # waitforlisten 230116 00:14:33.984 05:12:30 -- common/autotest_common.sh@829 -- # '[' -z 230116 ']' 00:14:33.984 05:12:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.984 05:12:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.984 05:12:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:33.984 05:12:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.984 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:14:33.984 [2024-11-20 05:12:30.706055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:33.984 [2024-11-20 05:12:30.706099] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.984 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.984 [2024-11-20 05:12:30.761722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:34.244 [2024-11-20 05:12:30.836556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:34.244 [2024-11-20 05:12:30.836667] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.244 [2024-11-20 05:12:30.836675] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.244 [2024-11-20 05:12:30.836681] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:34.244 [2024-11-20 05:12:30.836714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.244 [2024-11-20 05:12:30.836732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.244 [2024-11-20 05:12:30.836734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.814 05:12:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.814 05:12:31 -- common/autotest_common.sh@862 -- # return 0 00:14:34.814 05:12:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:34.814 05:12:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.814 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:14:34.814 05:12:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.814 05:12:31 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:34.814 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.814 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:14:34.814 [2024-11-20 05:12:31.584005] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x8ac8d0/0x8abf10) succeed. 00:14:34.814 [2024-11-20 05:12:31.592891] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x8adbc0/0x8ac490) succeed. 00:14:34.814 [2024-11-20 05:12:31.592911] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:14:34.814 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.814 05:12:31 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:34.814 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.814 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:14:34.814 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.814 05:12:31 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:34.814 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.814 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:14:34.814 [2024-11-20 05:12:31.617146] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:34.814 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.814 05:12:31 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:34.814 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.814 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:14:34.814 NULL1 00:14:34.814 05:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.814 05:12:31 -- target/connect_stress.sh@21 -- # PERF_PID=230359 00:14:34.814 05:12:31 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:34.814 05:12:31 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:34.814 05:12:31 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:34.814 05:12:31 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:35.075 05:12:31 
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- 
target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:35.075 05:12:31 -- target/connect_stress.sh@28 -- # cat 00:14:35.075 05:12:31 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:35.075 05:12:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.075 05:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.075 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:14:35.335 05:12:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.335 05:12:32 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:35.335 05:12:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.335 05:12:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.335 05:12:32 -- common/autotest_common.sh@10 -- # set +x 00:14:35.595 05:12:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.595 05:12:32 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:35.595 05:12:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.595 05:12:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.595 05:12:32 -- common/autotest_common.sh@10 -- # set +x 00:14:36.164 05:12:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.164 05:12:32 -- target/connect_stress.sh@34 -- # 
kill -0 230359 00:14:36.164 05:12:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.164 05:12:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.164 05:12:32 -- common/autotest_common.sh@10 -- # set +x 00:14:36.424 05:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.424 05:12:33 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:36.424 05:12:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.424 05:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.424 05:12:33 -- common/autotest_common.sh@10 -- # set +x 00:14:36.684 05:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.684 05:12:33 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:36.684 05:12:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.684 05:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.684 05:12:33 -- common/autotest_common.sh@10 -- # set +x 00:14:36.944 05:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.944 05:12:33 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:36.944 05:12:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.944 05:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.944 05:12:33 -- common/autotest_common.sh@10 -- # set +x 00:14:37.204 05:12:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.204 05:12:34 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:37.204 05:12:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.204 05:12:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.204 05:12:34 -- common/autotest_common.sh@10 -- # set +x 00:14:37.772 05:12:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.772 05:12:34 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:37.772 05:12:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.772 05:12:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.772 05:12:34 -- common/autotest_common.sh@10 -- # set +x 00:14:38.032 
05:12:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.032 05:12:34 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:38.032 05:12:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.032 05:12:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.032 05:12:34 -- common/autotest_common.sh@10 -- # set +x 00:14:38.291 05:12:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.291 05:12:34 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:38.291 05:12:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.291 05:12:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.291 05:12:34 -- common/autotest_common.sh@10 -- # set +x 00:14:38.552 05:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.552 05:12:35 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:38.552 05:12:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.552 05:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.552 05:12:35 -- common/autotest_common.sh@10 -- # set +x 00:14:39.122 05:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.122 05:12:35 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:39.122 05:12:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.122 05:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.122 05:12:35 -- common/autotest_common.sh@10 -- # set +x 00:14:39.381 05:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.381 05:12:35 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:39.381 05:12:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.381 05:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.381 05:12:35 -- common/autotest_common.sh@10 -- # set +x 00:14:39.641 05:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.641 05:12:36 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:39.641 05:12:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.641 05:12:36 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.641 05:12:36 -- common/autotest_common.sh@10 -- # set +x 00:14:39.900 05:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.900 05:12:36 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:39.900 05:12:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.900 05:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.900 05:12:36 -- common/autotest_common.sh@10 -- # set +x 00:14:40.160 05:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.160 05:12:36 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:40.160 05:12:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.160 05:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.160 05:12:36 -- common/autotest_common.sh@10 -- # set +x 00:14:40.728 05:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.728 05:12:37 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:40.728 05:12:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.728 05:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.728 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:14:40.987 05:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.987 05:12:37 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:40.987 05:12:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.987 05:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.987 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:14:41.246 05:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.246 05:12:37 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:41.246 05:12:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.246 05:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.246 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:14:41.505 05:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.505 05:12:38 -- 
target/connect_stress.sh@34 -- # kill -0 230359 00:14:41.505 05:12:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.505 05:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.505 05:12:38 -- common/autotest_common.sh@10 -- # set +x 00:14:41.764 05:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.765 05:12:38 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:41.765 05:12:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.765 05:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.765 05:12:38 -- common/autotest_common.sh@10 -- # set +x 00:14:42.334 05:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.334 05:12:38 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:42.334 05:12:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.334 05:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.334 05:12:38 -- common/autotest_common.sh@10 -- # set +x 00:14:42.592 05:12:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.592 05:12:39 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:42.592 05:12:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.592 05:12:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.592 05:12:39 -- common/autotest_common.sh@10 -- # set +x 00:14:42.851 05:12:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.851 05:12:39 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:42.851 05:12:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.851 05:12:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.851 05:12:39 -- common/autotest_common.sh@10 -- # set +x 00:14:43.111 05:12:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.111 05:12:39 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:43.111 05:12:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.111 05:12:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.111 05:12:39 -- 
common/autotest_common.sh@10 -- # set +x 00:14:43.681 05:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.681 05:12:40 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:43.681 05:12:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.681 05:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.681 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:14:43.940 05:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.940 05:12:40 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:43.940 05:12:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.940 05:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.940 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:14:44.200 05:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.200 05:12:40 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:44.200 05:12:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.200 05:12:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.200 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:14:44.460 05:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.460 05:12:41 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:44.460 05:12:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.460 05:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.460 05:12:41 -- common/autotest_common.sh@10 -- # set +x 00:14:44.719 05:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.719 05:12:41 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:44.719 05:12:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.719 05:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.719 05:12:41 -- common/autotest_common.sh@10 -- # set +x 00:14:45.289 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.289 05:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.289 
05:12:41 -- target/connect_stress.sh@34 -- # kill -0 230359 00:14:45.289 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (230359) - No such process 00:14:45.289 05:12:41 -- target/connect_stress.sh@38 -- # wait 230359 00:14:45.289 05:12:41 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:45.289 05:12:41 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:45.289 05:12:41 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:45.289 05:12:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:45.289 05:12:41 -- nvmf/common.sh@116 -- # sync 00:14:45.289 05:12:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:45.289 05:12:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:45.289 05:12:41 -- nvmf/common.sh@119 -- # set +e 00:14:45.289 05:12:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:45.289 05:12:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:45.289 rmmod nvme_rdma 00:14:45.289 rmmod nvme_fabrics 00:14:45.289 05:12:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:45.289 05:12:41 -- nvmf/common.sh@123 -- # set -e 00:14:45.289 05:12:41 -- nvmf/common.sh@124 -- # return 0 00:14:45.289 05:12:41 -- nvmf/common.sh@477 -- # '[' -n 230116 ']' 00:14:45.289 05:12:41 -- nvmf/common.sh@478 -- # killprocess 230116 00:14:45.289 05:12:41 -- common/autotest_common.sh@936 -- # '[' -z 230116 ']' 00:14:45.289 05:12:41 -- common/autotest_common.sh@940 -- # kill -0 230116 00:14:45.289 05:12:41 -- common/autotest_common.sh@941 -- # uname 00:14:45.289 05:12:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:45.289 05:12:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 230116 00:14:45.289 05:12:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:45.289 05:12:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:45.289 05:12:41 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 230116' 00:14:45.289 killing process with pid 230116 00:14:45.289 05:12:41 -- common/autotest_common.sh@955 -- # kill 230116 00:14:45.289 05:12:41 -- common/autotest_common.sh@960 -- # wait 230116 00:14:45.548 05:12:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:45.548 05:12:42 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:45.548 00:14:45.548 real 0m16.958s 00:14:45.548 user 0m41.803s 00:14:45.548 sys 0m7.899s 00:14:45.548 05:12:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:45.548 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:14:45.548 ************************************ 00:14:45.548 END TEST nvmf_connect_stress 00:14:45.548 ************************************ 00:14:45.548 05:12:42 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:14:45.548 05:12:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:45.548 05:12:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.548 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:14:45.548 ************************************ 00:14:45.548 START TEST nvmf_fused_ordering 00:14:45.548 ************************************ 00:14:45.548 05:12:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:14:45.548 * Looking for test storage... 
00:14:45.548 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:45.548 05:12:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:45.548 05:12:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:45.548 05:12:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:45.809 05:12:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:45.809 05:12:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:45.809 05:12:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:45.809 05:12:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:45.809 05:12:42 -- scripts/common.sh@335 -- # IFS=.-: 00:14:45.809 05:12:42 -- scripts/common.sh@335 -- # read -ra ver1 00:14:45.809 05:12:42 -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.809 05:12:42 -- scripts/common.sh@336 -- # read -ra ver2 00:14:45.809 05:12:42 -- scripts/common.sh@337 -- # local 'op=<' 00:14:45.809 05:12:42 -- scripts/common.sh@339 -- # ver1_l=2 00:14:45.809 05:12:42 -- scripts/common.sh@340 -- # ver2_l=1 00:14:45.809 05:12:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:45.809 05:12:42 -- scripts/common.sh@343 -- # case "$op" in 00:14:45.809 05:12:42 -- scripts/common.sh@344 -- # : 1 00:14:45.809 05:12:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:45.809 05:12:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.809 05:12:42 -- scripts/common.sh@364 -- # decimal 1 00:14:45.809 05:12:42 -- scripts/common.sh@352 -- # local d=1 00:14:45.809 05:12:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.809 05:12:42 -- scripts/common.sh@354 -- # echo 1 00:14:45.809 05:12:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:45.809 05:12:42 -- scripts/common.sh@365 -- # decimal 2 00:14:45.809 05:12:42 -- scripts/common.sh@352 -- # local d=2 00:14:45.809 05:12:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.809 05:12:42 -- scripts/common.sh@354 -- # echo 2 00:14:45.809 05:12:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:45.809 05:12:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:45.809 05:12:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:45.809 05:12:42 -- scripts/common.sh@367 -- # return 0 00:14:45.809 05:12:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.809 05:12:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:45.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.809 --rc genhtml_branch_coverage=1 00:14:45.809 --rc genhtml_function_coverage=1 00:14:45.809 --rc genhtml_legend=1 00:14:45.809 --rc geninfo_all_blocks=1 00:14:45.809 --rc geninfo_unexecuted_blocks=1 00:14:45.809 00:14:45.809 ' 00:14:45.809 05:12:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:45.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.809 --rc genhtml_branch_coverage=1 00:14:45.809 --rc genhtml_function_coverage=1 00:14:45.809 --rc genhtml_legend=1 00:14:45.809 --rc geninfo_all_blocks=1 00:14:45.809 --rc geninfo_unexecuted_blocks=1 00:14:45.809 00:14:45.809 ' 00:14:45.809 05:12:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:45.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.809 --rc genhtml_branch_coverage=1 00:14:45.809 --rc 
genhtml_function_coverage=1 00:14:45.809 --rc genhtml_legend=1 00:14:45.809 --rc geninfo_all_blocks=1 00:14:45.809 --rc geninfo_unexecuted_blocks=1 00:14:45.809 00:14:45.809 ' 00:14:45.809 05:12:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:45.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.809 --rc genhtml_branch_coverage=1 00:14:45.809 --rc genhtml_function_coverage=1 00:14:45.809 --rc genhtml_legend=1 00:14:45.809 --rc geninfo_all_blocks=1 00:14:45.809 --rc geninfo_unexecuted_blocks=1 00:14:45.809 00:14:45.809 ' 00:14:45.809 05:12:42 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.809 05:12:42 -- nvmf/common.sh@7 -- # uname -s 00:14:45.809 05:12:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.809 05:12:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.809 05:12:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.809 05:12:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.809 05:12:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.809 05:12:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.809 05:12:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.809 05:12:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.809 05:12:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.809 05:12:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.809 05:12:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:45.809 05:12:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:45.809 05:12:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.809 05:12:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.809 05:12:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:45.809 05:12:42 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:14:45.809 05:12:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.809 05:12:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.809 05:12:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.809 05:12:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.809 05:12:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.809 05:12:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.809 05:12:42 -- paths/export.sh@5 -- # export PATH 00:14:45.809 05:12:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.809 05:12:42 -- nvmf/common.sh@46 -- # : 0 00:14:45.809 05:12:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:45.809 05:12:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:45.809 05:12:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:45.809 05:12:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.809 05:12:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.809 05:12:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:45.809 05:12:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:45.809 05:12:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:45.809 05:12:42 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:45.809 05:12:42 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:45.809 05:12:42 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:14:45.809 05:12:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:45.809 05:12:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:45.809 05:12:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:45.809 05:12:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.809 05:12:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.809 05:12:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.809 05:12:42 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:45.809 05:12:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:45.809 05:12:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:45.809 05:12:42 -- common/autotest_common.sh@10 -- # set +x 00:14:51.090 05:12:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:51.090 05:12:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:51.090 05:12:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:51.090 05:12:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:51.090 05:12:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:51.090 05:12:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:51.090 05:12:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:51.090 05:12:47 -- nvmf/common.sh@294 -- # net_devs=() 00:14:51.090 05:12:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:51.090 05:12:47 -- nvmf/common.sh@295 -- # e810=() 00:14:51.090 05:12:47 -- nvmf/common.sh@295 -- # local -ga e810 00:14:51.090 05:12:47 -- nvmf/common.sh@296 -- # x722=() 00:14:51.090 05:12:47 -- nvmf/common.sh@296 -- # local -ga x722 00:14:51.090 05:12:47 -- nvmf/common.sh@297 -- # mlx=() 00:14:51.090 05:12:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:51.090 05:12:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.090 05:12:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:51.090 05:12:47 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:51.090 05:12:47 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:51.090 05:12:47 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:51.090 05:12:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:51.090 05:12:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:51.090 05:12:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:51.090 05:12:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:51.090 05:12:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:51.090 05:12:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:51.090 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:51.090 05:12:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:51.090 05:12:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:51.090 05:12:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.090 05:12:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.090 05:12:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:51.090 05:12:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme 
connect -i 15' 00:14:51.090 05:12:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:51.090 05:12:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:51.090 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:51.090 05:12:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:51.091 05:12:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:51.091 05:12:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:14:51.091 05:12:47 -- nvmf/common.sh@376 -- # modinfo irdma 00:14:51.091 05:12:47 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:14:51.091 05:12:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.091 05:12:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:51.091 05:12:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.091 05:12:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:51.091 Found net devices under 0000:af:00.0: cvl_0_0 00:14:51.091 05:12:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.091 05:12:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.091 05:12:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:51.091 
05:12:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.091 05:12:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:51.091 Found net devices under 0000:af:00.1: cvl_0_1 00:14:51.091 05:12:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.091 05:12:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:51.091 05:12:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:51.091 05:12:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:51.091 05:12:47 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:51.091 05:12:47 -- nvmf/common.sh@57 -- # uname 00:14:51.091 05:12:47 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:51.091 05:12:47 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:51.091 05:12:47 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:51.091 05:12:47 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:51.091 05:12:47 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:51.091 05:12:47 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:51.091 05:12:47 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:51.091 05:12:47 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:51.091 05:12:47 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:51.091 05:12:47 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:51.091 05:12:47 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:51.091 05:12:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:51.091 05:12:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:51.091 05:12:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:51.091 05:12:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:51.091 05:12:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:51.091 
05:12:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:14:51.091 05:12:47 -- nvmf/common.sh@104 -- # continue 2 00:14:51.091 05:12:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:14:51.091 05:12:47 -- nvmf/common.sh@104 -- # continue 2 00:14:51.091 05:12:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:51.091 05:12:47 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:14:51.091 05:12:47 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:51.091 05:12:47 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:51.091 05:12:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:14:51.091 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:51.091 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:51.091 altname enp175s0f0np0 00:14:51.091 altname ens801f0np0 00:14:51.091 inet 192.168.100.8/24 scope global cvl_0_0 00:14:51.091 valid_lft forever preferred_lft forever 00:14:51.091 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:51.091 valid_lft forever preferred_lft forever 00:14:51.091 05:12:47 
-- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:51.091 05:12:47 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:14:51.091 05:12:47 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:51.091 05:12:47 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:51.091 05:12:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:14:51.091 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:51.091 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:51.091 altname enp175s0f1np1 00:14:51.091 altname ens801f1np1 00:14:51.091 inet 192.168.100.9/24 scope global cvl_0_1 00:14:51.091 valid_lft forever preferred_lft forever 00:14:51.091 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:51.091 valid_lft forever preferred_lft forever 00:14:51.091 05:12:47 -- nvmf/common.sh@410 -- # return 0 00:14:51.091 05:12:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:51.091 05:12:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:51.091 05:12:47 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:51.091 05:12:47 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:51.091 05:12:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:51.091 05:12:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:51.091 05:12:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:51.091 05:12:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:51.091 05:12:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:51.091 05:12:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:51.091 05:12:47 -- 
nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:14:51.091 05:12:47 -- nvmf/common.sh@104 -- # continue 2 00:14:51.091 05:12:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:51.091 05:12:47 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:51.091 05:12:47 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:14:51.091 05:12:47 -- nvmf/common.sh@104 -- # continue 2 00:14:51.091 05:12:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:51.091 05:12:47 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:14:51.091 05:12:47 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:51.091 05:12:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:51.091 05:12:47 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:14:51.091 05:12:47 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:51.091 05:12:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:51.091 05:12:47 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:51.091 192.168.100.9' 00:14:51.091 05:12:47 -- nvmf/common.sh@445 -- # head -n 1 00:14:51.091 05:12:47 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:51.091 192.168.100.9' 00:14:51.091 05:12:47 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:51.091 05:12:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:51.091 192.168.100.9' 00:14:51.091 05:12:47 -- nvmf/common.sh@446 -- # tail -n +2 00:14:51.091 05:12:47 -- nvmf/common.sh@446 -- # head -n 1 00:14:51.091 05:12:47 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:51.091 05:12:47 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:51.091 05:12:47 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:51.091 05:12:47 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:51.091 05:12:47 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:51.091 05:12:47 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:51.091 05:12:47 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:51.091 05:12:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:51.091 05:12:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.091 05:12:47 -- common/autotest_common.sh@10 -- # set +x 00:14:51.091 05:12:47 -- nvmf/common.sh@469 -- # nvmfpid=235163 00:14:51.091 05:12:47 -- nvmf/common.sh@470 -- # waitforlisten 235163 00:14:51.091 05:12:47 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:51.091 05:12:47 -- common/autotest_common.sh@829 -- # '[' -z 235163 ']' 00:14:51.091 05:12:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.091 05:12:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.091 05:12:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:51.092 05:12:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.092 05:12:47 -- common/autotest_common.sh@10 -- # set +x 00:14:51.352 [2024-11-20 05:12:47.951407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:51.352 [2024-11-20 05:12:47.951454] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.352 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.352 [2024-11-20 05:12:48.007669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.352 [2024-11-20 05:12:48.080776] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:51.352 [2024-11-20 05:12:48.080880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.352 [2024-11-20 05:12:48.080888] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.352 [2024-11-20 05:12:48.080894] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:51.352 [2024-11-20 05:12:48.080914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.291 05:12:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.291 05:12:48 -- common/autotest_common.sh@862 -- # return 0 00:14:52.291 05:12:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:52.291 05:12:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:52.291 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:14:52.291 05:12:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.291 05:12:48 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:52.291 05:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.291 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:14:52.291 [2024-11-20 05:12:48.831681] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x14882a0/0x14878e0) succeed. 00:14:52.291 [2024-11-20 05:12:48.840317] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1489550/0x1487e60) succeed. 00:14:52.291 [2024-11-20 05:12:48.840345] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:14:52.291 05:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.291 05:12:48 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:52.291 05:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.291 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:14:52.291 05:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.291 05:12:48 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:52.291 05:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.291 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:14:52.291 [2024-11-20 05:12:48.857710] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:52.291 05:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.291 05:12:48 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:52.291 05:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.291 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:14:52.291 NULL1 00:14:52.291 05:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.291 05:12:48 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:52.291 05:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.291 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:14:52.291 05:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.291 05:12:48 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:52.291 05:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.291 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:14:52.291 05:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.291 05:12:48 -- target/fused_ordering.sh@22 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:52.291 [2024-11-20 05:12:48.911452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:52.291 [2024-11-20 05:12:48.911496] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235297 ] 00:14:52.291 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.291 Attached to nqn.2016-06.io.spdk:cnode1 00:14:52.291 Namespace ID: 1 size: 1GB 00:14:52.291 fused_ordering(0) 00:14:52.291 fused_ordering(1) 00:14:52.291 fused_ordering(2) 00:14:52.291 fused_ordering(3) 00:14:52.291 fused_ordering(4) 00:14:52.291 fused_ordering(5) 00:14:52.291 fused_ordering(6) 00:14:52.291 fused_ordering(7) 00:14:52.291 fused_ordering(8) 00:14:52.291 fused_ordering(9) 00:14:52.291 fused_ordering(10) 00:14:52.291 fused_ordering(11) 00:14:52.291 fused_ordering(12) 00:14:52.291 fused_ordering(13) 00:14:52.291 fused_ordering(14) 00:14:52.291 fused_ordering(15) 00:14:52.291 fused_ordering(16) 00:14:52.291 fused_ordering(17) 00:14:52.291 fused_ordering(18) 00:14:52.291 fused_ordering(19) 00:14:52.291 fused_ordering(20) 00:14:52.291 fused_ordering(21) 00:14:52.291 fused_ordering(22) 00:14:52.291 fused_ordering(23) 00:14:52.291 fused_ordering(24) 00:14:52.291 fused_ordering(25) 00:14:52.291 fused_ordering(26) 00:14:52.291 fused_ordering(27) 00:14:52.291 fused_ordering(28) 00:14:52.291 fused_ordering(29) 00:14:52.291 fused_ordering(30) 00:14:52.291 fused_ordering(31) 00:14:52.291 fused_ordering(32) 00:14:52.291 fused_ordering(33) 00:14:52.291 fused_ordering(34) 00:14:52.291 fused_ordering(35) 00:14:52.291 fused_ordering(36) 00:14:52.291 fused_ordering(37) 00:14:52.291 fused_ordering(38) 00:14:52.291 
fused_ordering(39) 00:14:52.291 fused_ordering(40) 00:14:52.291 fused_ordering(41) 00:14:52.291 fused_ordering(42) 00:14:52.291 fused_ordering(43) 00:14:52.291 fused_ordering(44) 00:14:52.291 fused_ordering(45) 00:14:52.291 fused_ordering(46) 00:14:52.291 fused_ordering(47) 00:14:52.291 fused_ordering(48) 00:14:52.291 fused_ordering(49) 00:14:52.291 fused_ordering(50) 00:14:52.291 fused_ordering(51) 00:14:52.291 fused_ordering(52) 00:14:52.291 fused_ordering(53) 00:14:52.291 fused_ordering(54) 00:14:52.291 fused_ordering(55) 00:14:52.291 fused_ordering(56) 00:14:52.291 fused_ordering(57) 00:14:52.291 fused_ordering(58) 00:14:52.291 fused_ordering(59) 00:14:52.292 fused_ordering(60) 00:14:52.292 fused_ordering(61) 00:14:52.292 fused_ordering(62) 00:14:52.292 fused_ordering(63) 00:14:52.292 fused_ordering(64) 00:14:52.292 fused_ordering(65) 00:14:52.292 fused_ordering(66) 00:14:52.292 fused_ordering(67) 00:14:52.292 fused_ordering(68) 00:14:52.292 fused_ordering(69) 00:14:52.292 fused_ordering(70) 00:14:52.292 fused_ordering(71) 00:14:52.292 fused_ordering(72) 00:14:52.292 fused_ordering(73) 00:14:52.292 fused_ordering(74) 00:14:52.292 fused_ordering(75) 00:14:52.292 fused_ordering(76) 00:14:52.292 fused_ordering(77) 00:14:52.292 fused_ordering(78) 00:14:52.292 fused_ordering(79) 00:14:52.292 fused_ordering(80) 00:14:52.292 fused_ordering(81) 00:14:52.292 fused_ordering(82) 00:14:52.292 fused_ordering(83) 00:14:52.292 fused_ordering(84) 00:14:52.292 fused_ordering(85) 00:14:52.292 fused_ordering(86) 00:14:52.292 fused_ordering(87) 00:14:52.292 fused_ordering(88) 00:14:52.292 fused_ordering(89) 00:14:52.292 fused_ordering(90) 00:14:52.292 fused_ordering(91) 00:14:52.292 fused_ordering(92) 00:14:52.292 fused_ordering(93) 00:14:52.292 fused_ordering(94) 00:14:52.292 fused_ordering(95) 00:14:52.292 fused_ordering(96) 00:14:52.292 fused_ordering(97) 00:14:52.292 fused_ordering(98) 00:14:52.292 fused_ordering(99) 00:14:52.292 fused_ordering(100) 00:14:52.292 
fused_ordering(101) 00:14:52.292 fused_ordering(102) 00:14:52.292 fused_ordering(103) 00:14:52.292 fused_ordering(104) 00:14:52.292 fused_ordering(105) 00:14:52.292 fused_ordering(106) 00:14:52.292 fused_ordering(107) 00:14:52.292 fused_ordering(108) 00:14:52.292 fused_ordering(109) 00:14:52.292 fused_ordering(110) 00:14:52.292 fused_ordering(111) 00:14:52.292 fused_ordering(112) 00:14:52.292 fused_ordering(113) 00:14:52.292 fused_ordering(114) 00:14:52.292 fused_ordering(115) 00:14:52.292 fused_ordering(116) 00:14:52.292 fused_ordering(117) 00:14:52.292 fused_ordering(118) 00:14:52.292 fused_ordering(119) 00:14:52.292 fused_ordering(120) 00:14:52.292 fused_ordering(121) 00:14:52.292 fused_ordering(122) 00:14:52.292 fused_ordering(123) 00:14:52.292 fused_ordering(124) 00:14:52.292 fused_ordering(125) 00:14:52.292 fused_ordering(126) 00:14:52.292 fused_ordering(127) 00:14:52.292 fused_ordering(128) 00:14:52.292 fused_ordering(129) 00:14:52.292 fused_ordering(130) 00:14:52.292 fused_ordering(131) 00:14:52.292 fused_ordering(132) 00:14:52.292 fused_ordering(133) 00:14:52.292 fused_ordering(134) 00:14:52.292 fused_ordering(135) 00:14:52.292 fused_ordering(136) 00:14:52.292 fused_ordering(137) 00:14:52.292 fused_ordering(138) 00:14:52.292 fused_ordering(139) 00:14:52.292 fused_ordering(140) 00:14:52.292 fused_ordering(141) 00:14:52.292 fused_ordering(142) 00:14:52.292 fused_ordering(143) 00:14:52.292 fused_ordering(144) 00:14:52.292 fused_ordering(145) 00:14:52.292 fused_ordering(146) 00:14:52.292 fused_ordering(147) 00:14:52.292 fused_ordering(148) 00:14:52.292 fused_ordering(149) 00:14:52.292 fused_ordering(150) 00:14:52.292 fused_ordering(151) 00:14:52.292 fused_ordering(152) 00:14:52.292 fused_ordering(153) 00:14:52.292 fused_ordering(154) 00:14:52.292 fused_ordering(155) 00:14:52.292 fused_ordering(156) 00:14:52.292 fused_ordering(157) 00:14:52.292 fused_ordering(158) 00:14:52.292 fused_ordering(159) 00:14:52.292 fused_ordering(160) 00:14:52.292 fused_ordering(161) 
00:14:52.292 fused_ordering(162) 00:14:52.292 fused_ordering(163) 00:14:52.292 fused_ordering(164) 00:14:52.292 fused_ordering(165) 00:14:52.292 fused_ordering(166) 00:14:52.292 fused_ordering(167) 00:14:52.292 fused_ordering(168) 00:14:52.292 fused_ordering(169) 00:14:52.292 fused_ordering(170) 00:14:52.292 fused_ordering(171) 00:14:52.292 fused_ordering(172) 00:14:52.292 fused_ordering(173) 00:14:52.292 fused_ordering(174) 00:14:52.292 fused_ordering(175) 00:14:52.292 fused_ordering(176) 00:14:52.292 fused_ordering(177) 00:14:52.292 fused_ordering(178) 00:14:52.292 fused_ordering(179) 00:14:52.292 fused_ordering(180) 00:14:52.292 fused_ordering(181) 00:14:52.292 fused_ordering(182) 00:14:52.292 fused_ordering(183) 00:14:52.292 fused_ordering(184) 00:14:52.292 fused_ordering(185) 00:14:52.292 fused_ordering(186) 00:14:52.292 fused_ordering(187) 00:14:52.292 fused_ordering(188) 00:14:52.292 fused_ordering(189) 00:14:52.292 fused_ordering(190) 00:14:52.292 fused_ordering(191) 00:14:52.292 fused_ordering(192) 00:14:52.292 fused_ordering(193) 00:14:52.292 fused_ordering(194) 00:14:52.292 fused_ordering(195) 00:14:52.292 fused_ordering(196) 00:14:52.292 fused_ordering(197) 00:14:52.292 fused_ordering(198) 00:14:52.292 fused_ordering(199) 00:14:52.292 fused_ordering(200) 00:14:52.292 fused_ordering(201) 00:14:52.292 fused_ordering(202) 00:14:52.292 fused_ordering(203) 00:14:52.292 fused_ordering(204) 00:14:52.292 fused_ordering(205) 00:14:52.551 fused_ordering(206) 00:14:52.551 fused_ordering(207) 00:14:52.551 fused_ordering(208) 00:14:52.551 fused_ordering(209) 00:14:52.551 fused_ordering(210) 00:14:52.551 fused_ordering(211) 00:14:52.551 fused_ordering(212) 00:14:52.551 fused_ordering(213) 00:14:52.551 fused_ordering(214) 00:14:52.551 fused_ordering(215) 00:14:52.551 fused_ordering(216) 00:14:52.551 fused_ordering(217) 00:14:52.551 fused_ordering(218) 00:14:52.551 fused_ordering(219) 00:14:52.551 fused_ordering(220) 00:14:52.551 fused_ordering(221) 00:14:52.551 
[fused_ordering(222) through fused_ordering(1023): 802 consecutive iterations logged between 00:14:52.551 and 00:14:52.812; repetitive per-iteration entries elided]
00:14:52.812 05:12:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:52.812 05:12:49 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:52.812 05:12:49 -- nvmf/common.sh@476 -- # nvmfcleanup
00:14:52.812 05:12:49 -- nvmf/common.sh@116 -- # sync
00:14:52.812 05:12:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:14:52.812 05:12:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:14:52.812 05:12:49 -- nvmf/common.sh@119 -- # set +e
00:14:52.812 05:12:49 -- nvmf/common.sh@120 -- # for i in {1..20}
00:14:52.812 05:12:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:14:52.812 rmmod nvme_rdma
00:14:52.812 rmmod nvme_fabrics
00:14:52.812 05:12:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:52.812 05:12:49 -- nvmf/common.sh@123 -- # set -e
00:14:52.812 05:12:49 -- nvmf/common.sh@124 -- # return 0
00:14:52.812 05:12:49 -- nvmf/common.sh@477 -- # '[' -n 235163 ']'
00:14:52.812 05:12:49 -- nvmf/common.sh@478 -- # killprocess 235163
00:14:52.812 05:12:49 -- common/autotest_common.sh@936 -- # '[' -z 235163 ']'
00:14:52.812 05:12:49 -- common/autotest_common.sh@940 -- # kill -0 235163
00:14:52.812 05:12:49 -- common/autotest_common.sh@941 -- # uname
00:14:52.812 05:12:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:52.812 05:12:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 235163
00:14:53.072 05:12:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1
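The teardown traced above (nvmftestfini -> nvmfcleanup -> killprocess) syncs, retries unloading the nvme-rdma and nvme-fabrics kernel modules with errexit temporarily disabled, then kills the nvmf target process only if `kill -0` shows it is still alive. A minimal standalone sketch of that sequence; `nvmf_cleanup_sketch` and `target_pid` are illustrative names, not the actual nvmf/common.sh helpers, and actually unloading the modules requires root:

```shell
#!/usr/bin/env bash
# Hedged sketch of the cleanup sequence traced in the log above.
nvmf_cleanup_sketch() {
    local target_pid=$1
    sync
    set +e    # module removal may fail while references remain; keep retrying
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    done
    set -e
    # Kill the nvmf target process only if the pid is set and still alive.
    if [ -n "$target_pid" ] && kill -0 "$target_pid" 2>/dev/null; then
        echo "killing process with pid $target_pid"
        kill "$target_pid"
        wait "$target_pid" 2>/dev/null || true
    fi
    return 0
}
```

On a box without the modules loaded (or without root), the modprobe loop simply exhausts its retries and the function still proceeds to the process-kill step, which mirrors how the real trace tolerates `rmmod` noise.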
00:14:53.072 05:12:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:53.072 05:12:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 235163'
killing process with pid 235163
00:14:53.072 05:12:49 -- common/autotest_common.sh@955 -- # kill 235163
00:14:53.072 05:12:49 -- common/autotest_common.sh@960 -- # wait 235163
00:14:53.072 05:12:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:14:53.072 05:12:49 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:14:53.072
00:14:53.072 real 0m7.653s
00:14:53.072 user 0m4.578s
00:14:53.072 sys 0m4.370s
00:14:53.072 05:12:49 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:53.072 05:12:49 -- common/autotest_common.sh@10 -- # set +x
00:14:53.072 ************************************
00:14:53.072 END TEST nvmf_fused_ordering
00:14:53.072 ************************************
00:14:53.333 05:12:49 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:14:53.333 05:12:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:53.333 05:12:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:53.333 05:12:49 -- common/autotest_common.sh@10 -- # set +x
00:14:53.333 ************************************
00:14:53.333 START TEST nvmf_delete_subsystem
00:14:53.333 ************************************
00:14:53.333 05:12:49 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:14:53.333 * Looking for test storage...
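The delete_subsystem run begins with the lcov version check traced next: `cmp_versions`/`lt` in scripts/common.sh splits both version strings on `.`, `-`, and `:` (via IFS) and compares the resulting fields numerically, left to right, treating a missing field as 0. A hedged standalone sketch of that comparison; `lt_sketch` is an illustrative name, not the real scripts/common.sh function, and it assumes purely numeric fields:

```shell
#!/usr/bin/env bash
# Sketch of the "is version A less than version B?" check traced in the log.
# Returns 0 (true) when $1 < $2, 1 otherwise.
lt_sketch() {
    local IFS=.-:              # split fields on '.', '-', ':' as the trace does
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b len=${#ver1[@]}
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}        # missing trailing fields compare as 0
        b=${ver2[v]:-0}
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1                   # equal versions are not less-than
}

if lt_sketch 1.15 2; then echo "1.15 < 2"; fi
```

This matches the traced behavior where `lt 1.15 2` succeeds (field 1 vs 2 decides immediately), which is why the script selects the newer lcov branch-coverage options.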
00:14:53.333 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:53.333 05:12:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:53.333 05:12:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:53.333 05:12:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:53.333 05:12:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:53.333 05:12:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:53.333 05:12:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:53.333 05:12:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:53.333 05:12:50 -- scripts/common.sh@335 -- # IFS=.-: 00:14:53.333 05:12:50 -- scripts/common.sh@335 -- # read -ra ver1 00:14:53.333 05:12:50 -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.333 05:12:50 -- scripts/common.sh@336 -- # read -ra ver2 00:14:53.333 05:12:50 -- scripts/common.sh@337 -- # local 'op=<' 00:14:53.333 05:12:50 -- scripts/common.sh@339 -- # ver1_l=2 00:14:53.333 05:12:50 -- scripts/common.sh@340 -- # ver2_l=1 00:14:53.333 05:12:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:53.333 05:12:50 -- scripts/common.sh@343 -- # case "$op" in 00:14:53.333 05:12:50 -- scripts/common.sh@344 -- # : 1 00:14:53.333 05:12:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:53.333 05:12:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.333 05:12:50 -- scripts/common.sh@364 -- # decimal 1 00:14:53.333 05:12:50 -- scripts/common.sh@352 -- # local d=1 00:14:53.333 05:12:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.333 05:12:50 -- scripts/common.sh@354 -- # echo 1 00:14:53.333 05:12:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:53.333 05:12:50 -- scripts/common.sh@365 -- # decimal 2 00:14:53.333 05:12:50 -- scripts/common.sh@352 -- # local d=2 00:14:53.333 05:12:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.333 05:12:50 -- scripts/common.sh@354 -- # echo 2 00:14:53.333 05:12:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:53.333 05:12:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:53.333 05:12:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:53.333 05:12:50 -- scripts/common.sh@367 -- # return 0 00:14:53.333 05:12:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.333 05:12:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:53.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.333 --rc genhtml_branch_coverage=1 00:14:53.333 --rc genhtml_function_coverage=1 00:14:53.333 --rc genhtml_legend=1 00:14:53.333 --rc geninfo_all_blocks=1 00:14:53.333 --rc geninfo_unexecuted_blocks=1 00:14:53.333 00:14:53.333 ' 00:14:53.333 05:12:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:53.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.333 --rc genhtml_branch_coverage=1 00:14:53.333 --rc genhtml_function_coverage=1 00:14:53.333 --rc genhtml_legend=1 00:14:53.333 --rc geninfo_all_blocks=1 00:14:53.333 --rc geninfo_unexecuted_blocks=1 00:14:53.333 00:14:53.333 ' 00:14:53.333 05:12:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:53.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.333 --rc genhtml_branch_coverage=1 00:14:53.333 --rc 
genhtml_function_coverage=1 00:14:53.333 --rc genhtml_legend=1 00:14:53.333 --rc geninfo_all_blocks=1 00:14:53.333 --rc geninfo_unexecuted_blocks=1 00:14:53.333 00:14:53.333 ' 00:14:53.333 05:12:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:53.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.333 --rc genhtml_branch_coverage=1 00:14:53.333 --rc genhtml_function_coverage=1 00:14:53.333 --rc genhtml_legend=1 00:14:53.333 --rc geninfo_all_blocks=1 00:14:53.333 --rc geninfo_unexecuted_blocks=1 00:14:53.333 00:14:53.333 ' 00:14:53.333 05:12:50 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.333 05:12:50 -- nvmf/common.sh@7 -- # uname -s 00:14:53.333 05:12:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.333 05:12:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.333 05:12:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.333 05:12:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.333 05:12:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.333 05:12:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.333 05:12:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.333 05:12:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.333 05:12:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.333 05:12:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.333 05:12:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:53.333 05:12:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:53.333 05:12:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.333 05:12:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.333 05:12:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:53.333 05:12:50 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:14:53.333 05:12:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.333 05:12:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.333 05:12:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.333 05:12:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.333 05:12:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.333 05:12:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.333 05:12:50 -- paths/export.sh@5 -- # export PATH 00:14:53.333 05:12:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.333 05:12:50 -- nvmf/common.sh@46 -- # : 0 00:14:53.333 05:12:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:53.333 05:12:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:53.333 05:12:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:53.333 05:12:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.333 05:12:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.333 05:12:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:53.333 05:12:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:53.333 05:12:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:53.333 05:12:50 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:53.333 05:12:50 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:53.333 05:12:50 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:14:53.333 05:12:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:53.333 05:12:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:53.333 05:12:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:53.333 05:12:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.333 05:12:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.333 05:12:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.333 05:12:50 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:53.333 05:12:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:53.333 05:12:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:53.333 05:12:50 -- common/autotest_common.sh@10 -- # set +x 00:14:59.911 05:12:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:59.911 05:12:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:59.911 05:12:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:59.911 05:12:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:59.911 05:12:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:59.911 05:12:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:59.911 05:12:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:59.911 05:12:55 -- nvmf/common.sh@294 -- # net_devs=() 00:14:59.911 05:12:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:59.911 05:12:55 -- nvmf/common.sh@295 -- # e810=() 00:14:59.911 05:12:55 -- nvmf/common.sh@295 -- # local -ga e810 00:14:59.911 05:12:55 -- nvmf/common.sh@296 -- # x722=() 00:14:59.911 05:12:55 -- nvmf/common.sh@296 -- # local -ga x722 00:14:59.911 05:12:55 -- nvmf/common.sh@297 -- # mlx=() 00:14:59.911 05:12:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:59.911 05:12:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.911 05:12:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:59.911 05:12:55 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:59.911 05:12:55 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:59.911 05:12:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:59.911 05:12:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:59.911 05:12:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:59.911 05:12:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:59.911 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:59.911 05:12:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme 
connect -i 15' 00:14:59.911 05:12:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:59.911 05:12:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:59.911 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:59.911 05:12:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:59.911 05:12:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:59.911 05:12:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:59.911 05:12:55 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:14:59.911 05:12:55 -- nvmf/common.sh@376 -- # modinfo irdma 00:14:59.911 05:12:55 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:14:59.911 05:12:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:59.911 05:12:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.911 05:12:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:59.911 05:12:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.912 05:12:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:59.912 Found net devices under 0000:af:00.0: cvl_0_0 00:14:59.912 05:12:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.912 05:12:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.912 05:12:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:59.912 
05:12:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.912 05:12:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:59.912 Found net devices under 0000:af:00.1: cvl_0_1 00:14:59.912 05:12:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.912 05:12:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:59.912 05:12:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:59.912 05:12:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:59.912 05:12:55 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:59.912 05:12:55 -- nvmf/common.sh@57 -- # uname 00:14:59.912 05:12:55 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:59.912 05:12:55 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:59.912 05:12:55 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:59.912 05:12:55 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:59.912 05:12:55 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:59.912 05:12:55 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:59.912 05:12:55 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:59.912 05:12:55 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:59.912 05:12:55 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:59.912 05:12:55 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:59.912 05:12:55 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:59.912 05:12:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:59.912 05:12:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:59.912 05:12:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:59.912 05:12:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:59.912 05:12:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:59.912 
05:12:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:14:59.912 05:12:55 -- nvmf/common.sh@104 -- # continue 2 00:14:59.912 05:12:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:14:59.912 05:12:55 -- nvmf/common.sh@104 -- # continue 2 00:14:59.912 05:12:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:59.912 05:12:55 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:14:59.912 05:12:55 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:59.912 05:12:55 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:59.912 05:12:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:14:59.912 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:59.912 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:59.912 altname enp175s0f0np0 00:14:59.912 altname ens801f0np0 00:14:59.912 inet 192.168.100.8/24 scope global cvl_0_0 00:14:59.912 valid_lft forever preferred_lft forever 00:14:59.912 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:59.912 valid_lft forever preferred_lft forever 00:14:59.912 05:12:55 
-- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:59.912 05:12:55 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:14:59.912 05:12:55 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:59.912 05:12:55 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:59.912 05:12:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:14:59.912 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:59.912 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:59.912 altname enp175s0f1np1 00:14:59.912 altname ens801f1np1 00:14:59.912 inet 192.168.100.9/24 scope global cvl_0_1 00:14:59.912 valid_lft forever preferred_lft forever 00:14:59.912 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:59.912 valid_lft forever preferred_lft forever 00:14:59.912 05:12:55 -- nvmf/common.sh@410 -- # return 0 00:14:59.912 05:12:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:59.912 05:12:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:59.912 05:12:55 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:59.912 05:12:55 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:59.912 05:12:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:59.912 05:12:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:59.912 05:12:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:59.912 05:12:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:59.912 05:12:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:59.912 05:12:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:59.912 05:12:55 -- 
nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:14:59.912 05:12:55 -- nvmf/common.sh@104 -- # continue 2 00:14:59.912 05:12:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:59.912 05:12:55 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:59.912 05:12:55 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:14:59.912 05:12:55 -- nvmf/common.sh@104 -- # continue 2 00:14:59.912 05:12:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:59.912 05:12:55 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:14:59.912 05:12:55 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:59.912 05:12:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:59.912 05:12:55 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:14:59.912 05:12:55 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:59.912 05:12:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:59.912 05:12:55 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:59.912 192.168.100.9' 00:14:59.912 05:12:55 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:59.912 192.168.100.9' 00:14:59.912 05:12:55 -- nvmf/common.sh@445 -- # head -n 1 00:14:59.912 05:12:55 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:59.912 05:12:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:59.912 192.168.100.9' 00:14:59.912 05:12:55 -- nvmf/common.sh@446 -- # head -n 1 00:14:59.912 05:12:55 -- nvmf/common.sh@446 -- # tail -n +2 00:14:59.912 05:12:55 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:59.912 05:12:55 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:59.912 05:12:55 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:59.912 05:12:55 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:59.912 05:12:55 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:59.912 05:12:55 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:59.912 05:12:55 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:59.912 05:12:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:59.912 05:12:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.912 05:12:55 -- common/autotest_common.sh@10 -- # set +x 00:14:59.912 05:12:55 -- nvmf/common.sh@469 -- # nvmfpid=238576 00:14:59.912 05:12:55 -- nvmf/common.sh@470 -- # waitforlisten 238576 00:14:59.912 05:12:55 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:59.912 05:12:55 -- common/autotest_common.sh@829 -- # '[' -z 238576 ']' 00:14:59.912 05:12:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.912 05:12:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.912 05:12:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:59.912 05:12:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.912 05:12:55 -- common/autotest_common.sh@10 -- # set +x 00:14:59.912 [2024-11-20 05:12:55.900481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:59.912 [2024-11-20 05:12:55.900528] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.912 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.912 [2024-11-20 05:12:55.960437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:59.912 [2024-11-20 05:12:56.033600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:59.912 [2024-11-20 05:12:56.033708] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.912 [2024-11-20 05:12:56.033716] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.912 [2024-11-20 05:12:56.033723] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:59.912 [2024-11-20 05:12:56.033764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.913 [2024-11-20 05:12:56.033767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.913 05:12:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.913 05:12:56 -- common/autotest_common.sh@862 -- # return 0 00:14:59.913 05:12:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:59.913 05:12:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.913 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:00.172 05:12:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.172 05:12:56 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:00.172 05:12:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.172 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:00.172 [2024-11-20 05:12:56.762015] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1470ab0/0x14700f0) succeed. 00:15:00.172 [2024-11-20 05:12:56.770675] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1471d60/0x1470670) succeed. 00:15:00.172 [2024-11-20 05:12:56.770697] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:15:00.172 05:12:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.172 05:12:56 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:00.172 05:12:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.172 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:00.172 05:12:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.172 05:12:56 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:00.172 05:12:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.172 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:00.172 [2024-11-20 05:12:56.786879] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:00.172 05:12:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.172 05:12:56 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:00.172 05:12:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.172 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:00.172 NULL1 00:15:00.172 05:12:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.172 05:12:56 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:00.172 05:12:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.172 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:00.172 Delay0 00:15:00.172 05:12:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.172 05:12:56 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.172 05:12:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.172 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:00.172 05:12:56 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.172 05:12:56 -- target/delete_subsystem.sh@28 -- # perf_pid=238823 00:15:00.172 05:12:56 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:00.172 05:12:56 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:00.172 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.172 [2024-11-20 05:12:56.881227] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:02.078 05:12:58 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.078 05:12:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.078 05:12:58 -- common/autotest_common.sh@10 -- # set +x 00:15:02.647 [2024-11-20 05:12:59.392541] nvme_rdma.c:2483:nvme_rdma_log_wc_status: *ERROR*: WC error, qid 5, qp state 1, request 0x35184374498592 type 1, status: (12): transport retry counter exceeded 00:15:02.647 NVMe io qpair process completion error 00:15:02.647 NVMe io qpair process completion error 00:15:02.647 Read completed with error (sct=0, sc=8) 00:15:02.647 starting I/O failed: -6 00:15:02.647 Read completed with error (sct=0, sc=8) 00:15:02.647 Write completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error 
(sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed 
with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Write completed with error (sct=0, sc=8) 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:02.648 starting I/O failed: -6 00:15:02.648 Read completed with error (sct=0, sc=8) 00:15:03.218 [2024-11-20 05:12:59.941066] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:03.218 Read completed with error (sct=0, sc=8) 00:15:03.218 Read completed with error (sct=0, sc=8) 00:15:03.218 starting I/O failed: -6 00:15:03.218 Write completed with error (sct=0, sc=8) 00:15:03.218 Write completed with error (sct=0, sc=8) 00:15:03.218 starting I/O failed: -6 00:15:03.218 Read completed with error (sct=0, sc=8) 00:15:03.218 Read completed with error (sct=0, sc=8) 00:15:03.218 starting I/O failed: -6 00:15:03.218 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error 
(sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed 
with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 starting I/O failed: -6 00:15:03.219 [2024-11-20 05:12:59.941624] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 
Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 [2024-11-20 05:12:59.941901] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Write completed with error 
(sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.219 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Write completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 Read completed with error (sct=0, sc=8) 00:15:03.220 
Read completed with error (sct=0, sc=8) 00:15:03.220 05:12:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.220 05:12:59 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:03.220 05:12:59 -- target/delete_subsystem.sh@35 -- # kill -0 238823 00:15:03.220 05:12:59 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:03.789 05:13:00 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:03.789 05:13:00 -- target/delete_subsystem.sh@35 -- # kill -0 238823 00:15:03.789 05:13:00 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:04.357 NVMe io qpair process completion error 00:15:04.357 NVMe io qpair process completion error 00:15:04.357 05:13:00 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:04.357 05:13:00 -- target/delete_subsystem.sh@35 -- # kill -0 238823 00:15:04.357 05:13:00 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:04.927 [2024-11-20 05:13:01.477065] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 
00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 [2024-11-20 05:13:01.477439] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed 
with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 [2024-11-20 05:13:01.477693] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 
00:15:04.927 [2024-11-20 05:13:01.477934] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Write completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 Read completed with error (sct=0, sc=8) 00:15:04.927 [2024-11-20 05:13:01.478912] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:04.927 [2024-11-20 05:13:01.491930] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:04.927 [2024-11-20 05:13:01.491947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:04.927 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:04.927 05:13:01 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:04.927 05:13:01 -- target/delete_subsystem.sh@35 -- # kill -0 238823 00:15:04.927 05:13:01 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:04.927 Initializing NVMe Controllers 00:15:04.927 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.927 Controller IO queue size 128, less than required. 00:15:04.927 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.927 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:04.927 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:04.927 Initialization complete. Launching workers. 
00:15:04.927 ======================================================== 00:15:04.927 Latency(us) 00:15:04.927 Device Information : IOPS MiB/s Average min max 00:15:04.927 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 140.88 0.07 1320306.97 436169.97 2508682.63 00:15:04.927 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 140.88 0.07 1357740.91 970536.74 2506909.48 00:15:04.927 ======================================================== 00:15:04.927 Total : 281.76 0.14 1339023.94 436169.97 2508682.63 00:15:04.927 00:15:05.187 05:13:01 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:05.187 05:13:01 -- target/delete_subsystem.sh@35 -- # kill -0 238823 00:15:05.187 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (238823) - No such process 00:15:05.187 05:13:01 -- target/delete_subsystem.sh@45 -- # NOT wait 238823 00:15:05.187 05:13:01 -- common/autotest_common.sh@650 -- # local es=0 00:15:05.187 05:13:01 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 238823 00:15:05.187 05:13:01 -- common/autotest_common.sh@638 -- # local arg=wait 00:15:05.187 05:13:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.187 05:13:01 -- common/autotest_common.sh@642 -- # type -t wait 00:15:05.187 05:13:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.187 05:13:02 -- common/autotest_common.sh@653 -- # wait 238823 00:15:05.187 05:13:02 -- common/autotest_common.sh@653 -- # es=1 00:15:05.187 05:13:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:05.187 05:13:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:05.187 05:13:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:05.187 05:13:02 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.187 05:13:02 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:05.187 05:13:02 -- common/autotest_common.sh@10 -- # set +x 00:15:05.187 05:13:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.187 05:13:02 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:05.447 05:13:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.447 05:13:02 -- common/autotest_common.sh@10 -- # set +x 00:15:05.447 [2024-11-20 05:13:02.017013] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:05.447 05:13:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.447 05:13:02 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.447 05:13:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.447 05:13:02 -- common/autotest_common.sh@10 -- # set +x 00:15:05.447 05:13:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.447 05:13:02 -- target/delete_subsystem.sh@54 -- # perf_pid=239745 00:15:05.447 05:13:02 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:05.447 05:13:02 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:05.447 05:13:02 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:05.447 05:13:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.447 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.447 [2024-11-20 05:13:02.098203] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:15:06.016 05:13:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.016 05:13:02 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:06.016 05:13:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.276 05:13:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.276 05:13:03 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:06.276 05:13:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.851 05:13:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.851 05:13:03 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:06.851 05:13:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:07.424 05:13:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:07.424 05:13:04 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:07.424 05:13:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:07.993 05:13:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:07.993 05:13:04 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:07.993 05:13:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:08.252 05:13:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:08.252 05:13:05 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:08.252 05:13:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:08.821 05:13:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:08.821 05:13:05 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:08.821 05:13:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:09.390 05:13:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:09.391 05:13:06 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:09.391 05:13:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:09.960 05:13:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:09.960 05:13:06 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:09.960 05:13:06 -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:10.529 05:13:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:10.529 05:13:07 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:10.529 05:13:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:10.789 05:13:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:10.789 05:13:07 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:10.789 05:13:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:11.359 05:13:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:11.359 05:13:08 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:11.359 05:13:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:11.928 05:13:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:11.928 05:13:08 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:11.928 05:13:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:12.498 05:13:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:12.498 05:13:09 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:12.498 05:13:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:12.498 Initializing NVMe Controllers 00:15:12.498 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.498 Controller IO queue size 128, less than required. 00:15:12.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.498 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:12.498 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:12.498 Initialization complete. Launching workers. 
00:15:12.498 ======================================================== 00:15:12.498 Latency(us) 00:15:12.498 Device Information : IOPS MiB/s Average min max 00:15:12.498 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001561.16 1000056.78 1004334.17 00:15:12.498 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002786.95 1000379.81 1006574.34 00:15:12.498 ======================================================== 00:15:12.498 Total : 256.00 0.12 1002174.06 1000056.78 1006574.34 00:15:12.498 00:15:13.067 05:13:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:13.067 05:13:09 -- target/delete_subsystem.sh@57 -- # kill -0 239745 00:15:13.067 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (239745) - No such process 00:15:13.067 05:13:09 -- target/delete_subsystem.sh@67 -- # wait 239745 00:15:13.067 05:13:09 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:13.067 05:13:09 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:13.067 05:13:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:13.067 05:13:09 -- nvmf/common.sh@116 -- # sync 00:15:13.067 05:13:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:13.067 05:13:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:13.067 05:13:09 -- nvmf/common.sh@119 -- # set +e 00:15:13.067 05:13:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:13.067 05:13:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:13.067 rmmod nvme_rdma 00:15:13.067 rmmod nvme_fabrics 00:15:13.067 05:13:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:13.067 05:13:09 -- nvmf/common.sh@123 -- # set -e 00:15:13.067 05:13:09 -- nvmf/common.sh@124 -- # return 0 00:15:13.067 05:13:09 -- nvmf/common.sh@477 -- # '[' -n 238576 ']' 00:15:13.067 05:13:09 -- nvmf/common.sh@478 -- # killprocess 238576 00:15:13.067 05:13:09 -- 
common/autotest_common.sh@936 -- # '[' -z 238576 ']' 00:15:13.067 05:13:09 -- common/autotest_common.sh@940 -- # kill -0 238576 00:15:13.067 05:13:09 -- common/autotest_common.sh@941 -- # uname 00:15:13.067 05:13:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:13.067 05:13:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 238576 00:15:13.067 05:13:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:13.067 05:13:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:13.067 05:13:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 238576' 00:15:13.067 killing process with pid 238576 00:15:13.067 05:13:09 -- common/autotest_common.sh@955 -- # kill 238576 00:15:13.067 05:13:09 -- common/autotest_common.sh@960 -- # wait 238576 00:15:13.328 05:13:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:13.328 05:13:09 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:13.328 00:15:13.328 real 0m19.996s 00:15:13.328 user 0m51.976s 00:15:13.328 sys 0m5.383s 00:15:13.328 05:13:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:13.328 05:13:09 -- common/autotest_common.sh@10 -- # set +x 00:15:13.328 ************************************ 00:15:13.328 END TEST nvmf_delete_subsystem 00:15:13.328 ************************************ 00:15:13.328 05:13:09 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:13.328 05:13:09 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:15:13.328 05:13:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:13.328 05:13:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.328 05:13:09 -- common/autotest_common.sh@10 -- # set +x 00:15:13.328 ************************************ 00:15:13.328 START TEST nvmf_nvme_cli 00:15:13.328 ************************************ 00:15:13.328 05:13:09 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:15:13.329 * Looking for test storage... 00:15:13.329 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:13.329 05:13:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:13.329 05:13:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:13.329 05:13:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:13.329 05:13:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:13.329 05:13:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:13.329 05:13:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:13.329 05:13:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:13.329 05:13:10 -- scripts/common.sh@335 -- # IFS=.-: 00:15:13.329 05:13:10 -- scripts/common.sh@335 -- # read -ra ver1 00:15:13.329 05:13:10 -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.329 05:13:10 -- scripts/common.sh@336 -- # read -ra ver2 00:15:13.329 05:13:10 -- scripts/common.sh@337 -- # local 'op=<' 00:15:13.329 05:13:10 -- scripts/common.sh@339 -- # ver1_l=2 00:15:13.329 05:13:10 -- scripts/common.sh@340 -- # ver2_l=1 00:15:13.329 05:13:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:13.329 05:13:10 -- scripts/common.sh@343 -- # case "$op" in 00:15:13.329 05:13:10 -- scripts/common.sh@344 -- # : 1 00:15:13.329 05:13:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:13.329 05:13:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:13.329 05:13:10 -- scripts/common.sh@364 -- # decimal 1 00:15:13.329 05:13:10 -- scripts/common.sh@352 -- # local d=1 00:15:13.329 05:13:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.329 05:13:10 -- scripts/common.sh@354 -- # echo 1 00:15:13.329 05:13:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:13.329 05:13:10 -- scripts/common.sh@365 -- # decimal 2 00:15:13.329 05:13:10 -- scripts/common.sh@352 -- # local d=2 00:15:13.329 05:13:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.329 05:13:10 -- scripts/common.sh@354 -- # echo 2 00:15:13.329 05:13:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:13.329 05:13:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:13.329 05:13:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:13.329 05:13:10 -- scripts/common.sh@367 -- # return 0 00:15:13.329 05:13:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.329 05:13:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:13.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.329 --rc genhtml_branch_coverage=1 00:15:13.329 --rc genhtml_function_coverage=1 00:15:13.329 --rc genhtml_legend=1 00:15:13.329 --rc geninfo_all_blocks=1 00:15:13.329 --rc geninfo_unexecuted_blocks=1 00:15:13.329 00:15:13.329 ' 00:15:13.329 05:13:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:13.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.329 --rc genhtml_branch_coverage=1 00:15:13.329 --rc genhtml_function_coverage=1 00:15:13.329 --rc genhtml_legend=1 00:15:13.329 --rc geninfo_all_blocks=1 00:15:13.329 --rc geninfo_unexecuted_blocks=1 00:15:13.329 00:15:13.329 ' 00:15:13.329 05:13:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:13.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.329 --rc genhtml_branch_coverage=1 00:15:13.329 --rc 
genhtml_function_coverage=1 00:15:13.329 --rc genhtml_legend=1 00:15:13.329 --rc geninfo_all_blocks=1 00:15:13.329 --rc geninfo_unexecuted_blocks=1 00:15:13.329 00:15:13.329 ' 00:15:13.329 05:13:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:13.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.329 --rc genhtml_branch_coverage=1 00:15:13.329 --rc genhtml_function_coverage=1 00:15:13.329 --rc genhtml_legend=1 00:15:13.329 --rc geninfo_all_blocks=1 00:15:13.329 --rc geninfo_unexecuted_blocks=1 00:15:13.329 00:15:13.329 ' 00:15:13.329 05:13:10 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.329 05:13:10 -- nvmf/common.sh@7 -- # uname -s 00:15:13.329 05:13:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.329 05:13:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.329 05:13:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.329 05:13:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.329 05:13:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.329 05:13:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.329 05:13:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.329 05:13:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.329 05:13:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.329 05:13:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.329 05:13:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:13.329 05:13:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:13.329 05:13:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.329 05:13:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.329 05:13:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:13.329 05:13:10 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:13.329 05:13:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.329 05:13:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.329 05:13:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.329 05:13:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.329 05:13:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.329 05:13:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.329 05:13:10 -- paths/export.sh@5 -- # export PATH 00:15:13.329 05:13:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.329 05:13:10 -- nvmf/common.sh@46 -- # : 0 00:15:13.329 05:13:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:13.329 05:13:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:13.329 05:13:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:13.329 05:13:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.329 05:13:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.329 05:13:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:13.329 05:13:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:13.329 05:13:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:13.329 05:13:10 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:13.329 05:13:10 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:13.329 05:13:10 -- target/nvme_cli.sh@14 
-- # devs=() 00:15:13.329 05:13:10 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:13.329 05:13:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:13.329 05:13:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.329 05:13:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:13.329 05:13:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:13.329 05:13:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:13.329 05:13:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.329 05:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.329 05:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.593 05:13:10 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:15:13.593 05:13:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:13.593 05:13:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:13.593 05:13:10 -- common/autotest_common.sh@10 -- # set +x 00:15:18.869 05:13:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:18.869 05:13:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:18.869 05:13:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:18.869 05:13:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:18.869 05:13:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:18.869 05:13:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:18.869 05:13:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:18.869 05:13:15 -- nvmf/common.sh@294 -- # net_devs=() 00:15:18.869 05:13:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:18.869 05:13:15 -- nvmf/common.sh@295 -- # e810=() 00:15:18.869 05:13:15 -- nvmf/common.sh@295 -- # local -ga e810 00:15:18.869 05:13:15 -- nvmf/common.sh@296 -- # x722=() 00:15:18.869 05:13:15 -- nvmf/common.sh@296 -- # local -ga x722 00:15:18.869 05:13:15 -- nvmf/common.sh@297 -- # mlx=() 00:15:18.869 05:13:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:18.869 05:13:15 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.869 05:13:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:18.869 05:13:15 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:18.869 05:13:15 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:18.869 05:13:15 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:18.869 05:13:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:18.869 05:13:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:18.869 05:13:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:18.870 05:13:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:18.870 05:13:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:18.870 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:18.870 05:13:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.870 05:13:15 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:18.870 05:13:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:18.870 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:18.870 05:13:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:18.870 05:13:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:18.870 05:13:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:15:18.870 05:13:15 -- nvmf/common.sh@376 -- # modinfo irdma 00:15:18.870 05:13:15 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:15:18.870 05:13:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.870 05:13:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:18.870 05:13:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.870 05:13:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:18.870 Found net devices under 0000:af:00.0: cvl_0_0 00:15:18.870 05:13:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.870 05:13:15 -- nvmf/common.sh@381 -- # for pci in 
"${pci_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.870 05:13:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:18.870 05:13:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.870 05:13:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:18.870 Found net devices under 0000:af:00.1: cvl_0_1 00:15:18.870 05:13:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.870 05:13:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:18.870 05:13:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:18.870 05:13:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:18.870 05:13:15 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:18.870 05:13:15 -- nvmf/common.sh@57 -- # uname 00:15:18.870 05:13:15 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:18.870 05:13:15 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:18.870 05:13:15 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:18.870 05:13:15 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:18.870 05:13:15 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:18.870 05:13:15 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:18.870 05:13:15 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:18.870 05:13:15 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:18.870 05:13:15 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:18.870 05:13:15 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:18.870 05:13:15 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:18.870 05:13:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:18.870 05:13:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:18.870 05:13:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:18.870 
05:13:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:18.870 05:13:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:18.870 05:13:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:15:18.870 05:13:15 -- nvmf/common.sh@104 -- # continue 2 00:15:18.870 05:13:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:15:18.870 05:13:15 -- nvmf/common.sh@104 -- # continue 2 00:15:18.870 05:13:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:18.870 05:13:15 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:15:18.870 05:13:15 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:18.870 05:13:15 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:18.870 05:13:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:15:18.870 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:18.870 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:15:18.870 altname enp175s0f0np0 00:15:18.870 altname ens801f0np0 00:15:18.870 inet 192.168.100.8/24 scope global cvl_0_0 
00:15:18.870 valid_lft forever preferred_lft forever 00:15:18.870 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:15:18.870 valid_lft forever preferred_lft forever 00:15:18.870 05:13:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:18.870 05:13:15 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:15:18.870 05:13:15 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:18.870 05:13:15 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:18.870 05:13:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:15:18.870 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:18.870 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:15:18.870 altname enp175s0f1np1 00:15:18.870 altname ens801f1np1 00:15:18.870 inet 192.168.100.9/24 scope global cvl_0_1 00:15:18.870 valid_lft forever preferred_lft forever 00:15:18.870 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:15:18.870 valid_lft forever preferred_lft forever 00:15:18.870 05:13:15 -- nvmf/common.sh@410 -- # return 0 00:15:18.870 05:13:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:18.870 05:13:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:18.870 05:13:15 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:18.870 05:13:15 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:18.870 05:13:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:18.870 05:13:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:18.870 05:13:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:18.870 05:13:15 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:18.870 05:13:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:18.870 05:13:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:15:18.870 05:13:15 -- nvmf/common.sh@104 -- # continue 2 00:15:18.870 05:13:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.870 05:13:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:18.870 05:13:15 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:15:18.870 05:13:15 -- nvmf/common.sh@104 -- # continue 2 00:15:18.870 05:13:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:18.870 05:13:15 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:15:18.870 05:13:15 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:18.870 05:13:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:18.870 05:13:15 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:15:18.870 05:13:15 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:15:18.870 05:13:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:18.870 05:13:15 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:18.870 
192.168.100.9' 00:15:18.870 05:13:15 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:18.870 192.168.100.9' 00:15:18.870 05:13:15 -- nvmf/common.sh@445 -- # head -n 1 00:15:18.870 05:13:15 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:18.870 05:13:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:18.870 192.168.100.9' 00:15:18.870 05:13:15 -- nvmf/common.sh@446 -- # tail -n +2 00:15:18.870 05:13:15 -- nvmf/common.sh@446 -- # head -n 1 00:15:18.870 05:13:15 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:18.870 05:13:15 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:18.870 05:13:15 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:18.870 05:13:15 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:18.871 05:13:15 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:18.871 05:13:15 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:18.871 05:13:15 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:18.871 05:13:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:18.871 05:13:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:18.871 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:15:18.871 05:13:15 -- nvmf/common.sh@469 -- # nvmfpid=243971 00:15:18.871 05:13:15 -- nvmf/common.sh@470 -- # waitforlisten 243971 00:15:18.871 05:13:15 -- common/autotest_common.sh@829 -- # '[' -z 243971 ']' 00:15:18.871 05:13:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.871 05:13:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.871 05:13:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:18.871 05:13:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.871 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:15:18.871 05:13:15 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:18.871 [2024-11-20 05:13:15.454660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:18.871 [2024-11-20 05:13:15.454703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.871 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.871 [2024-11-20 05:13:15.510895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.871 [2024-11-20 05:13:15.588481] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:18.871 [2024-11-20 05:13:15.588586] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.871 [2024-11-20 05:13:15.588595] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.871 [2024-11-20 05:13:15.588602] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:18.871 [2024-11-20 05:13:15.588635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.871 [2024-11-20 05:13:15.588653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.871 [2024-11-20 05:13:15.588742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.871 [2024-11-20 05:13:15.588743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.811 05:13:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.811 05:13:16 -- common/autotest_common.sh@862 -- # return 0 00:15:19.811 05:13:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:19.811 05:13:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.811 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:19.811 05:13:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.811 05:13:16 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:19.811 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.811 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:19.811 [2024-11-20 05:13:16.330341] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x10da100/0x10d9740) succeed. 00:15:19.811 [2024-11-20 05:13:16.339350] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x10db470/0x10d9cc0) succeed. 00:15:19.811 [2024-11-20 05:13:16.339372] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:15:19.811 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.811 05:13:16 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:19.811 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.811 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:19.811 Malloc0 00:15:19.811 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.811 05:13:16 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:19.811 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.811 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:19.811 Malloc1 00:15:19.811 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.811 05:13:16 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:19.811 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.811 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:19.811 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.811 05:13:16 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:19.811 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.811 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:19.811 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.811 05:13:16 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.811 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.811 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:19.811 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.811 05:13:16 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:19.811 05:13:16 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.811 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:19.811 [2024-11-20 05:13:16.424116] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:19.811 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.811 05:13:16 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:19.811 05:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.811 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:19.811 05:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.811 05:13:16 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:15:19.811 00:15:19.811 Discovery Log Number of Records 2, Generation counter 2 00:15:19.811 =====Discovery Log Entry 0====== 00:15:19.811 trtype: rdma 00:15:19.811 adrfam: ipv4 00:15:19.811 subtype: current discovery subsystem 00:15:19.811 treq: not required 00:15:19.811 portid: 0 00:15:19.811 trsvcid: 4420 00:15:19.811 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:19.811 traddr: 192.168.100.8 00:15:19.811 eflags: explicit discovery connections, duplicate discovery information 00:15:19.811 rdma_prtype: not specified 00:15:19.811 rdma_qptype: connected 00:15:19.811 rdma_cms: rdma-cm 00:15:19.811 rdma_pkey: 0x0000 00:15:19.811 =====Discovery Log Entry 1====== 00:15:19.811 trtype: rdma 00:15:19.811 adrfam: ipv4 00:15:19.811 subtype: nvme subsystem 00:15:19.811 treq: not required 00:15:19.811 portid: 0 00:15:19.811 trsvcid: 4420 00:15:19.811 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:19.811 traddr: 192.168.100.8 00:15:19.811 eflags: none 00:15:19.811 rdma_prtype: not specified 00:15:19.811 rdma_qptype: connected 00:15:19.811 rdma_cms: rdma-cm 00:15:19.811 rdma_pkey: 0x0000 
00:15:19.811 05:13:16 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:19.811 05:13:16 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:19.811 05:13:16 -- nvmf/common.sh@510 -- # local dev _ 00:15:19.811 05:13:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:19.811 05:13:16 -- nvmf/common.sh@509 -- # nvme list 00:15:19.811 05:13:16 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:19.811 05:13:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:19.811 05:13:16 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:19.811 05:13:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:19.811 05:13:16 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:15:19.811 05:13:16 -- nvmf/common.sh@514 -- # echo /dev/nvme1n1 00:15:19.811 05:13:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:19.811 05:13:16 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:15:19.811 05:13:16 -- nvmf/common.sh@514 -- # echo /dev/nvme1n2 00:15:19.811 05:13:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:19.811 05:13:16 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:15:19.811 05:13:16 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:20.071 05:13:16 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:20.071 05:13:16 -- common/autotest_common.sh@1187 -- # local i=0 00:15:20.071 05:13:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.071 05:13:16 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:15:20.071 05:13:16 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:15:20.071 05:13:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:22.608 05:13:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:22.608 05:13:18 -- 
common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:22.608 05:13:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.608 05:13:18 -- common/autotest_common.sh@1196 -- # nvme_devices=2 00:15:22.608 05:13:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.608 05:13:18 -- common/autotest_common.sh@1197 -- # return 0 00:15:22.608 05:13:18 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:22.608 05:13:18 -- nvmf/common.sh@510 -- # local dev _ 00:15:22.608 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.608 05:13:18 -- nvmf/common.sh@509 -- # nvme list 00:15:22.608 05:13:18 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:22.608 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.608 05:13:18 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@514 -- # echo /dev/nvme1n1 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@514 -- # echo /dev/nvme1n2 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:22.609 /dev/nvme0n2 00:15:22.609 /dev/nvme1n1 00:15:22.609 /dev/nvme1n2 ]] 00:15:22.609 05:13:18 -- target/nvme_cli.sh@59 -- # 
devs=($(get_nvme_devs)) 00:15:22.609 05:13:18 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:22.609 05:13:18 -- nvmf/common.sh@510 -- # local dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@509 -- # nvme list 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@514 -- # echo /dev/nvme1n1 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:15:22.609 05:13:18 -- nvmf/common.sh@514 -- # echo /dev/nvme1n2 00:15:22.609 05:13:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:22.609 05:13:18 -- target/nvme_cli.sh@59 -- # nvme_num=4 00:15:22.609 05:13:18 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.177 05:13:19 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:23.177 05:13:19 -- common/autotest_common.sh@1208 -- # local i=0 00:15:23.177 05:13:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:23.177 05:13:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
00:15:23.177 05:13:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:23.177 05:13:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.177 05:13:19 -- common/autotest_common.sh@1220 -- # return 0 00:15:23.177 05:13:19 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:23.177 05:13:19 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.177 05:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.177 05:13:19 -- common/autotest_common.sh@10 -- # set +x 00:15:23.177 05:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.177 05:13:19 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:23.177 05:13:19 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:23.177 05:13:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:23.177 05:13:19 -- nvmf/common.sh@116 -- # sync 00:15:23.177 05:13:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:23.177 05:13:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:23.177 05:13:19 -- nvmf/common.sh@119 -- # set +e 00:15:23.177 05:13:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:23.177 05:13:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:23.177 rmmod nvme_rdma 00:15:23.177 rmmod nvme_fabrics 00:15:23.177 05:13:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:23.177 05:13:19 -- nvmf/common.sh@123 -- # set -e 00:15:23.177 05:13:19 -- nvmf/common.sh@124 -- # return 0 00:15:23.177 05:13:19 -- nvmf/common.sh@477 -- # '[' -n 243971 ']' 00:15:23.177 05:13:19 -- nvmf/common.sh@478 -- # killprocess 243971 00:15:23.177 05:13:19 -- common/autotest_common.sh@936 -- # '[' -z 243971 ']' 00:15:23.177 05:13:19 -- common/autotest_common.sh@940 -- # kill -0 243971 00:15:23.177 05:13:19 -- common/autotest_common.sh@941 -- # uname 00:15:23.177 05:13:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.177 05:13:19 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 243971 00:15:23.177 05:13:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:23.177 05:13:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:23.177 05:13:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 243971' 00:15:23.177 killing process with pid 243971 00:15:23.177 05:13:19 -- common/autotest_common.sh@955 -- # kill 243971 00:15:23.177 05:13:19 -- common/autotest_common.sh@960 -- # wait 243971 00:15:23.436 05:13:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:23.436 05:13:20 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:23.436 00:15:23.436 real 0m10.287s 00:15:23.436 user 0m20.191s 00:15:23.436 sys 0m4.543s 00:15:23.436 05:13:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:23.436 05:13:20 -- common/autotest_common.sh@10 -- # set +x 00:15:23.436 ************************************ 00:15:23.436 END TEST nvmf_nvme_cli 00:15:23.436 ************************************ 00:15:23.695 05:13:20 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:15:23.695 05:13:20 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:15:23.695 05:13:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:23.695 05:13:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:23.695 05:13:20 -- common/autotest_common.sh@10 -- # set +x 00:15:23.695 ************************************ 00:15:23.695 START TEST nvmf_host_management 00:15:23.695 ************************************ 00:15:23.695 05:13:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:15:23.695 * Looking for test storage... 
00:15:23.695 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:23.695 05:13:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:23.695 05:13:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:23.695 05:13:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:23.695 05:13:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:23.695 05:13:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:23.695 05:13:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:23.695 05:13:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:23.695 05:13:20 -- scripts/common.sh@335 -- # IFS=.-: 00:15:23.695 05:13:20 -- scripts/common.sh@335 -- # read -ra ver1 00:15:23.695 05:13:20 -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.695 05:13:20 -- scripts/common.sh@336 -- # read -ra ver2 00:15:23.695 05:13:20 -- scripts/common.sh@337 -- # local 'op=<' 00:15:23.695 05:13:20 -- scripts/common.sh@339 -- # ver1_l=2 00:15:23.695 05:13:20 -- scripts/common.sh@340 -- # ver2_l=1 00:15:23.695 05:13:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:23.695 05:13:20 -- scripts/common.sh@343 -- # case "$op" in 00:15:23.695 05:13:20 -- scripts/common.sh@344 -- # : 1 00:15:23.695 05:13:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:23.695 05:13:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.695 05:13:20 -- scripts/common.sh@364 -- # decimal 1 00:15:23.695 05:13:20 -- scripts/common.sh@352 -- # local d=1 00:15:23.695 05:13:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.695 05:13:20 -- scripts/common.sh@354 -- # echo 1 00:15:23.695 05:13:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:23.695 05:13:20 -- scripts/common.sh@365 -- # decimal 2 00:15:23.695 05:13:20 -- scripts/common.sh@352 -- # local d=2 00:15:23.695 05:13:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.695 05:13:20 -- scripts/common.sh@354 -- # echo 2 00:15:23.695 05:13:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:23.695 05:13:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:23.695 05:13:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:23.695 05:13:20 -- scripts/common.sh@367 -- # return 0 00:15:23.695 05:13:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.695 05:13:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:23.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.695 --rc genhtml_branch_coverage=1 00:15:23.695 --rc genhtml_function_coverage=1 00:15:23.695 --rc genhtml_legend=1 00:15:23.695 --rc geninfo_all_blocks=1 00:15:23.695 --rc geninfo_unexecuted_blocks=1 00:15:23.695 00:15:23.695 ' 00:15:23.695 05:13:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:23.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.695 --rc genhtml_branch_coverage=1 00:15:23.695 --rc genhtml_function_coverage=1 00:15:23.695 --rc genhtml_legend=1 00:15:23.695 --rc geninfo_all_blocks=1 00:15:23.695 --rc geninfo_unexecuted_blocks=1 00:15:23.695 00:15:23.695 ' 00:15:23.695 05:13:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:23.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.695 --rc genhtml_branch_coverage=1 00:15:23.695 --rc 
genhtml_function_coverage=1 00:15:23.695 --rc genhtml_legend=1 00:15:23.695 --rc geninfo_all_blocks=1 00:15:23.695 --rc geninfo_unexecuted_blocks=1 00:15:23.695 00:15:23.695 ' 00:15:23.695 05:13:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:23.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.695 --rc genhtml_branch_coverage=1 00:15:23.695 --rc genhtml_function_coverage=1 00:15:23.695 --rc genhtml_legend=1 00:15:23.695 --rc geninfo_all_blocks=1 00:15:23.695 --rc geninfo_unexecuted_blocks=1 00:15:23.695 00:15:23.695 ' 00:15:23.695 05:13:20 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.695 05:13:20 -- nvmf/common.sh@7 -- # uname -s 00:15:23.695 05:13:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.695 05:13:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.695 05:13:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.695 05:13:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.695 05:13:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.695 05:13:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.696 05:13:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.696 05:13:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.696 05:13:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.696 05:13:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.696 05:13:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:23.696 05:13:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:23.696 05:13:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.696 05:13:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.696 05:13:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:23.696 05:13:20 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:23.696 05:13:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.696 05:13:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.696 05:13:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.696 05:13:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.696 05:13:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.696 05:13:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.696 05:13:20 -- paths/export.sh@5 -- # export PATH 00:15:23.696 05:13:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.696 05:13:20 -- nvmf/common.sh@46 -- # : 0 00:15:23.696 05:13:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:23.696 05:13:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:23.696 05:13:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:23.696 05:13:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.696 05:13:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.696 05:13:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:23.696 05:13:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:23.696 05:13:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:23.696 05:13:20 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:23.696 05:13:20 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:23.696 05:13:20 -- 
target/host_management.sh@104 -- # nvmftestinit 00:15:23.696 05:13:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:23.696 05:13:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.696 05:13:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:23.696 05:13:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:23.696 05:13:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:23.696 05:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.696 05:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.696 05:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.696 05:13:20 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:15:23.696 05:13:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:23.696 05:13:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:23.696 05:13:20 -- common/autotest_common.sh@10 -- # set +x 00:15:28.977 05:13:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:28.977 05:13:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:28.977 05:13:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:28.977 05:13:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:28.977 05:13:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:28.977 05:13:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:28.977 05:13:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:28.977 05:13:25 -- nvmf/common.sh@294 -- # net_devs=() 00:15:28.977 05:13:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:28.977 05:13:25 -- nvmf/common.sh@295 -- # e810=() 00:15:28.977 05:13:25 -- nvmf/common.sh@295 -- # local -ga e810 00:15:28.977 05:13:25 -- nvmf/common.sh@296 -- # x722=() 00:15:28.977 05:13:25 -- nvmf/common.sh@296 -- # local -ga x722 00:15:28.977 05:13:25 -- nvmf/common.sh@297 -- # mlx=() 00:15:28.977 05:13:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:28.977 05:13:25 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.977 05:13:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:28.977 05:13:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:28.977 05:13:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:28.977 05:13:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:28.977 05:13:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:28.977 05:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:28.977 05:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:28.977 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:28.977 05:13:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.977 05:13:25 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:28.977 05:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:28.977 05:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:28.977 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:28.977 05:13:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:28.977 05:13:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:28.977 05:13:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:15:28.977 05:13:25 -- nvmf/common.sh@376 -- # modinfo irdma 00:15:28.977 05:13:25 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:15:28.977 05:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:28.977 05:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.977 05:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:28.977 05:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.977 05:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:28.977 Found net devices under 0000:af:00.0: cvl_0_0 00:15:28.977 05:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.977 05:13:25 -- nvmf/common.sh@381 -- # for pci in 
"${pci_devs[@]}" 00:15:28.977 05:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.977 05:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:28.977 05:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.977 05:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:28.977 Found net devices under 0000:af:00.1: cvl_0_1 00:15:28.977 05:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.977 05:13:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:28.977 05:13:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:28.977 05:13:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:28.977 05:13:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:28.977 05:13:25 -- nvmf/common.sh@57 -- # uname 00:15:28.977 05:13:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:28.977 05:13:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:28.977 05:13:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:28.977 05:13:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:28.977 05:13:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:28.977 05:13:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:28.977 05:13:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:28.977 05:13:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:28.977 05:13:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:28.977 05:13:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:28.977 05:13:25 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:28.977 05:13:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:28.977 05:13:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:28.977 05:13:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:28.977 
05:13:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:28.977 05:13:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:28.977 05:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:28.977 05:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:28.977 05:13:25 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:15:28.977 05:13:25 -- nvmf/common.sh@104 -- # continue 2 00:15:28.977 05:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:28.977 05:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:28.977 05:13:25 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:28.977 05:13:25 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:15:28.977 05:13:25 -- nvmf/common.sh@104 -- # continue 2 00:15:28.977 05:13:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:28.977 05:13:25 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:15:28.977 05:13:25 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:15:28.977 05:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:15:28.977 05:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:28.977 05:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:28.977 05:13:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:28.977 05:13:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:28.977 05:13:25 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:15:28.977 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:28.977 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:15:28.977 altname enp175s0f0np0 00:15:28.978 altname ens801f0np0 00:15:28.978 inet 192.168.100.8/24 scope global cvl_0_0 
00:15:28.978 valid_lft forever preferred_lft forever 00:15:28.978 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:15:28.978 valid_lft forever preferred_lft forever 00:15:28.978 05:13:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:28.978 05:13:25 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:15:28.978 05:13:25 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:15:28.978 05:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:15:28.978 05:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:28.978 05:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:28.978 05:13:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:28.978 05:13:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:28.978 05:13:25 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:15:28.978 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:28.978 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:15:28.978 altname enp175s0f1np1 00:15:28.978 altname ens801f1np1 00:15:28.978 inet 192.168.100.9/24 scope global cvl_0_1 00:15:28.978 valid_lft forever preferred_lft forever 00:15:28.978 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:15:28.978 valid_lft forever preferred_lft forever 00:15:28.978 05:13:25 -- nvmf/common.sh@410 -- # return 0 00:15:28.978 05:13:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:28.978 05:13:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:28.978 05:13:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:28.978 05:13:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:28.978 05:13:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:28.978 05:13:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:28.978 05:13:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:28.978 05:13:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:28.978 05:13:25 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:28.978 05:13:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:28.978 05:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:28.978 05:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:28.978 05:13:25 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:28.978 05:13:25 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:15:28.978 05:13:25 -- nvmf/common.sh@104 -- # continue 2 00:15:28.978 05:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:28.978 05:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:28.978 05:13:25 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:28.978 05:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:28.978 05:13:25 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:28.978 05:13:25 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:15:28.978 05:13:25 -- nvmf/common.sh@104 -- # continue 2 00:15:28.978 05:13:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:28.978 05:13:25 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:15:28.978 05:13:25 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:15:28.978 05:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:15:28.978 05:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:28.978 05:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:28.978 05:13:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:28.978 05:13:25 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:15:28.978 05:13:25 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:15:28.978 05:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:15:28.978 05:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:28.978 05:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:28.978 05:13:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:28.978 
192.168.100.9' 00:15:28.978 05:13:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:28.978 192.168.100.9' 00:15:28.978 05:13:25 -- nvmf/common.sh@445 -- # head -n 1 00:15:28.978 05:13:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:28.978 05:13:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:28.978 192.168.100.9' 00:15:28.978 05:13:25 -- nvmf/common.sh@446 -- # tail -n +2 00:15:28.978 05:13:25 -- nvmf/common.sh@446 -- # head -n 1 00:15:28.978 05:13:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:28.978 05:13:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:28.978 05:13:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:28.978 05:13:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:28.978 05:13:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:28.978 05:13:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:28.978 05:13:25 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:15:28.978 05:13:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:28.978 05:13:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.978 05:13:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.978 ************************************ 00:15:28.978 START TEST nvmf_host_management 00:15:28.978 ************************************ 00:15:28.978 05:13:25 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:15:28.978 05:13:25 -- target/host_management.sh@69 -- # starttarget 00:15:28.978 05:13:25 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:28.978 05:13:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:28.978 05:13:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.978 05:13:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.978 05:13:25 -- nvmf/common.sh@469 -- # nvmfpid=247985 00:15:28.978 05:13:25 -- nvmf/common.sh@470 -- # waitforlisten 
247985 00:15:28.978 05:13:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:28.978 05:13:25 -- common/autotest_common.sh@829 -- # '[' -z 247985 ']' 00:15:28.978 05:13:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.978 05:13:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.978 05:13:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.978 05:13:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.978 05:13:25 -- common/autotest_common.sh@10 -- # set +x 00:15:29.238 [2024-11-20 05:13:25.811207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:29.238 [2024-11-20 05:13:25.811255] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.238 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.238 [2024-11-20 05:13:25.869156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.238 [2024-11-20 05:13:25.940147] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:29.238 [2024-11-20 05:13:25.940256] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.238 [2024-11-20 05:13:25.940264] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.238 [2024-11-20 05:13:25.940270] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:29.238 [2024-11-20 05:13:25.940375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.238 [2024-11-20 05:13:25.940443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.238 [2024-11-20 05:13:25.940533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.238 [2024-11-20 05:13:25.940534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:29.808 05:13:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.808 05:13:26 -- common/autotest_common.sh@862 -- # return 0 00:15:29.808 05:13:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:29.808 05:13:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.808 05:13:26 -- common/autotest_common.sh@10 -- # set +x 00:15:30.068 05:13:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.068 05:13:26 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:30.068 05:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.068 05:13:26 -- common/autotest_common.sh@10 -- # set +x 00:15:30.068 [2024-11-20 05:13:26.681282] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x62b3f0/0x62aa30) succeed. 00:15:30.068 [2024-11-20 05:13:26.690186] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x62c760/0x62afb0) succeed. 00:15:30.068 [2024-11-20 05:13:26.690209] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:15:30.068 05:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.068 05:13:26 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:30.068 05:13:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.068 05:13:26 -- common/autotest_common.sh@10 -- # set +x 00:15:30.068 05:13:26 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:30.068 05:13:26 -- target/host_management.sh@23 -- # cat 00:15:30.068 05:13:26 -- target/host_management.sh@30 -- # rpc_cmd 00:15:30.068 05:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.068 05:13:26 -- common/autotest_common.sh@10 -- # set +x 00:15:30.068 Malloc0 00:15:30.068 [2024-11-20 05:13:26.753348] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:30.068 05:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.068 05:13:26 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:30.068 05:13:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.068 05:13:26 -- common/autotest_common.sh@10 -- # set +x 00:15:30.068 05:13:26 -- target/host_management.sh@73 -- # perfpid=248067 00:15:30.068 05:13:26 -- target/host_management.sh@74 -- # waitforlisten 248067 /var/tmp/bdevperf.sock 00:15:30.068 05:13:26 -- common/autotest_common.sh@829 -- # '[' -z 248067 ']' 00:15:30.068 05:13:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.068 05:13:26 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:30.068 05:13:26 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:30.068 05:13:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.068 05:13:26 -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.068 05:13:26 -- nvmf/common.sh@520 -- # config=() 00:15:30.068 05:13:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.068 05:13:26 -- nvmf/common.sh@520 -- # local subsystem config 00:15:30.068 05:13:26 -- common/autotest_common.sh@10 -- # set +x 00:15:30.068 05:13:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:30.068 05:13:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:30.068 { 00:15:30.068 "params": { 00:15:30.068 "name": "Nvme$subsystem", 00:15:30.068 "trtype": "$TEST_TRANSPORT", 00:15:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.068 "adrfam": "ipv4", 00:15:30.068 "trsvcid": "$NVMF_PORT", 00:15:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.068 "hdgst": ${hdgst:-false}, 00:15:30.068 "ddgst": ${ddgst:-false} 00:15:30.068 }, 00:15:30.068 "method": "bdev_nvme_attach_controller" 00:15:30.068 } 00:15:30.068 EOF 00:15:30.068 )") 00:15:30.068 05:13:26 -- nvmf/common.sh@542 -- # cat 00:15:30.068 05:13:26 -- nvmf/common.sh@544 -- # jq . 00:15:30.068 05:13:26 -- nvmf/common.sh@545 -- # IFS=, 00:15:30.068 05:13:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:30.068 "params": { 00:15:30.068 "name": "Nvme0", 00:15:30.068 "trtype": "rdma", 00:15:30.068 "traddr": "192.168.100.8", 00:15:30.068 "adrfam": "ipv4", 00:15:30.068 "trsvcid": "4420", 00:15:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:30.068 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:30.068 "hdgst": false, 00:15:30.068 "ddgst": false 00:15:30.068 }, 00:15:30.068 "method": "bdev_nvme_attach_controller" 00:15:30.068 }' 00:15:30.068 [2024-11-20 05:13:26.840989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:30.068 [2024-11-20 05:13:26.841035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248067 ] 00:15:30.068 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.328 [2024-11-20 05:13:26.900239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.328 [2024-11-20 05:13:26.970138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.328 Running I/O for 10 seconds... 00:15:30.898 05:13:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.898 05:13:27 -- common/autotest_common.sh@862 -- # return 0 00:15:30.898 05:13:27 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:30.898 05:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.898 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:15:30.898 05:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.898 05:13:27 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.898 05:13:27 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:30.898 05:13:27 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:30.898 05:13:27 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:30.898 05:13:27 -- target/host_management.sh@52 -- # local ret=1 00:15:30.898 05:13:27 -- target/host_management.sh@53 -- # local i 00:15:30.898 05:13:27 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:30.898 05:13:27 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:30.898 05:13:27 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:30.898 05:13:27 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:15:30.898 05:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.898 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:15:30.898 05:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.157 05:13:27 -- target/host_management.sh@55 -- # read_io_count=3259 00:15:31.157 05:13:27 -- target/host_management.sh@58 -- # '[' 3259 -ge 100 ']' 00:15:31.158 05:13:27 -- target/host_management.sh@59 -- # ret=0 00:15:31.158 05:13:27 -- target/host_management.sh@60 -- # break 00:15:31.158 05:13:27 -- target/host_management.sh@64 -- # return 0 00:15:31.158 05:13:27 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:31.158 05:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.158 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:15:31.158 05:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.158 05:13:27 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:31.158 05:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.158 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:15:31.158 05:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.158 05:13:27 -- target/host_management.sh@87 -- # sleep 1 00:15:31.729 [2024-11-20 05:13:28.293069] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:31.729 [2024-11-20 05:13:28.293106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x8bb3fe68 00:15:31.729 [2024-11-20 05:13:28.293117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.729 [2024-11-20 
05:13:28.293132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x4c3ff8da 00:15:31.729 [2024-11-20 05:13:28.293140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.729 [2024-11-20 05:13:28.293149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x4c3ff8da 00:15:31.729 [2024-11-20 05:13:28.293156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.729 [2024-11-20 05:13:28.293164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x8bb3fe68 00:15:31.729 [2024-11-20 05:13:28.293172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.729 [2024-11-20 05:13:28.293180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x94150a7a 00:15:31.729 [2024-11-20 05:13:28.293187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.729 [2024-11-20 05:13:28.293195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x94150a7a 00:15:31.729 [2024-11-20 05:13:28.293201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.729 [2024-11-20 05:13:28.293210] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x8bb3fe68 00:15:31.729 [2024-11-20 05:13:28.293218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x49c114 00:15:31.730 [2024-11-20 05:13:28.293238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x1842a18b 00:15:31.730 [2024-11-20 05:13:28.293253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x4c3ff8da 00:15:31.730 [2024-11-20 05:13:28.293268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x49c114 00:15:31.730 [2024-11-20 05:13:28.293284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x1842a18b 00:15:31.730 [2024-11-20 05:13:28.293298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x1842a18b 00:15:31.730 [2024-11-20 05:13:28.293318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x4c3ff8da 00:15:31.730 [2024-11-20 05:13:28.293334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x8bb3fe68 00:15:31.730 [2024-11-20 05:13:28.293349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x4c3ff8da 00:15:31.730 [2024-11-20 05:13:28.293363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4c2000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8a0000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc9f000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc7e000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc5d000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd65000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd44000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd23000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd02000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cce1000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ccc0000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0bf000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d09e000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cff9000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfd8000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfb7000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf96000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf75000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf54000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf33000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf12000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cef1000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ced0000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2cf000 len:0x10000 key:0x2a66e9e5 00:15:31.730 [2024-11-20 05:13:28.293736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x4c3ff8da 00:15:31.730 [2024-11-20 05:13:28.293750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x49c114 00:15:31.730 [2024-11-20 05:13:28.293766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.730 [2024-11-20 05:13:28.293774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x8bb3fe68 00:15:31.731 [2024-11-20 05:13:28.293781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x8bb3fe68 00:15:31.731 [2024-11-20 05:13:28.293796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x49c114 00:15:31.731 [2024-11-20 05:13:28.293811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x4c3ff8da 00:15:31.731 [2024-11-20 05:13:28.293826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x94150a7a 00:15:31.731 [2024-11-20 05:13:28.293841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x49c114 00:15:31.731 [2024-11-20 05:13:28.293855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x1842a18b 00:15:31.731 [2024-11-20 05:13:28.293869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x94150a7a 00:15:31.731 [2024-11-20 05:13:28.293883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x8bb3fe68 00:15:31.731 [2024-11-20 05:13:28.293898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x49c114 00:15:31.731 [2024-11-20 05:13:28.293916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x94150a7a 00:15:31.731 [2024-11-20 05:13:28.293931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x8bb3fe68 00:15:31.731 [2024-11-20 05:13:28.293946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x49c114 00:15:31.731 [2024-11-20 05:13:28.293961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x1842a18b 00:15:31.731 [2024-11-20 05:13:28.293975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x94150a7a 00:15:31.731 [2024-11-20 05:13:28.293989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.293996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x49c114 00:15:31.731 [2024-11-20 05:13:28.294003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.294013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x4c3ff8da 00:15:31.731 [2024-11-20 05:13:28.294020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.294028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x94150a7a 00:15:31.731 [2024-11-20 05:13:28.294034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.294042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x94150a7a 00:15:31.731 [2024-11-20 05:13:28.294058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.294069] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x8bb3fe68 00:15:31.731 [2024-11-20 05:13:28.294076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.294084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x49c114 00:15:31.731 [2024-11-20 05:13:28.294091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.294101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x94150a7a 00:15:31.731 [2024-11-20 05:13:28.294107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1a0c6f0 sqhd:15c0 p:0 m:0 dnr:0 00:15:31.731 [2024-11-20 05:13:28.294388] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 
00:15:31.731 [2024-11-20 05:13:28.295287] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:31.731 task offset: 53376 on job bdev=Nvme0n1 fails 00:15:31.731 00:15:31.731 Latency(us) 00:15:31.731 [2024-11-20T04:13:28.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.731 [2024-11-20T04:13:28.559Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:31.731 [2024-11-20T04:13:28.559Z] Job: Nvme0n1 ended in about 1.16 seconds with error 00:15:31.731 Verification LBA range: start 0x0 length 0x400 00:15:31.731 Nvme0n1 : 1.16 3010.23 188.14 55.27 0.00 20688.39 2980.33 555245.96 00:15:31.731 [2024-11-20T04:13:28.559Z] =================================================================================================================== 00:15:31.731 [2024-11-20T04:13:28.559Z] Total : 3010.23 188.14 55.27 0.00 20688.39 2980.33 555245.96 00:15:31.731 [2024-11-20 05:13:28.296949] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:31.731 [2024-11-20 05:13:28.296962] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:31.731 [2024-11-20 05:13:28.310346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:31.731 [2024-11-20 05:13:28.326002] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:31.991 05:13:28 -- target/host_management.sh@91 -- # kill -9 248067 00:15:31.991 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (248067) - No such process 00:15:31.991 05:13:28 -- target/host_management.sh@91 -- # true 00:15:31.991 05:13:28 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:31.991 05:13:28 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:31.991 05:13:28 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:31.991 05:13:28 -- nvmf/common.sh@520 -- # config=() 00:15:31.991 05:13:28 -- nvmf/common.sh@520 -- # local subsystem config 00:15:31.991 05:13:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:31.991 05:13:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:31.991 { 00:15:31.991 "params": { 00:15:31.991 "name": "Nvme$subsystem", 00:15:31.991 "trtype": "$TEST_TRANSPORT", 00:15:31.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:31.991 "adrfam": "ipv4", 00:15:31.991 "trsvcid": "$NVMF_PORT", 00:15:31.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:31.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:31.991 "hdgst": ${hdgst:-false}, 00:15:31.991 "ddgst": ${ddgst:-false} 00:15:31.991 }, 00:15:31.991 "method": "bdev_nvme_attach_controller" 00:15:31.991 } 00:15:31.991 EOF 00:15:31.991 )") 00:15:31.991 05:13:28 -- nvmf/common.sh@542 -- # cat 00:15:31.991 05:13:28 -- nvmf/common.sh@544 -- # jq . 
00:15:31.991 05:13:28 -- nvmf/common.sh@545 -- # IFS=, 00:15:31.991 05:13:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:31.991 "params": { 00:15:31.991 "name": "Nvme0", 00:15:31.991 "trtype": "rdma", 00:15:31.991 "traddr": "192.168.100.8", 00:15:31.991 "adrfam": "ipv4", 00:15:31.991 "trsvcid": "4420", 00:15:31.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:31.991 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:31.991 "hdgst": false, 00:15:31.991 "ddgst": false 00:15:31.991 }, 00:15:31.991 "method": "bdev_nvme_attach_controller" 00:15:31.991 }' 00:15:31.991 [2024-11-20 05:13:28.811293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:31.991 [2024-11-20 05:13:28.811337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248508 ] 00:15:32.251 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.251 [2024-11-20 05:13:28.867630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.251 [2024-11-20 05:13:28.933374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.521 Running I/O for 1 seconds... 
00:15:33.461 00:15:33.461 Latency(us) 00:15:33.461 [2024-11-20T04:13:30.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.461 [2024-11-20T04:13:30.289Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:33.461 Verification LBA range: start 0x0 length 0x400 00:15:33.461 Nvme0n1 : 1.01 5768.77 360.55 0.00 0.00 10915.80 998.64 25839.91 00:15:33.461 [2024-11-20T04:13:30.289Z] =================================================================================================================== 00:15:33.461 [2024-11-20T04:13:30.289Z] Total : 5768.77 360.55 0.00 0.00 10915.80 998.64 25839.91 00:15:33.720 05:13:30 -- target/host_management.sh@101 -- # stoptarget 00:15:33.720 05:13:30 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:33.720 05:13:30 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:33.720 05:13:30 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:33.720 05:13:30 -- target/host_management.sh@40 -- # nvmftestfini 00:15:33.720 05:13:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:33.720 05:13:30 -- nvmf/common.sh@116 -- # sync 00:15:33.720 05:13:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:33.720 05:13:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:33.720 05:13:30 -- nvmf/common.sh@119 -- # set +e 00:15:33.720 05:13:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:33.720 05:13:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:33.720 rmmod nvme_rdma 00:15:33.720 rmmod nvme_fabrics 00:15:33.720 05:13:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:33.720 05:13:30 -- nvmf/common.sh@123 -- # set -e 00:15:33.720 05:13:30 -- nvmf/common.sh@124 -- # return 0 00:15:33.720 05:13:30 -- nvmf/common.sh@477 -- # '[' -n 247985 ']' 00:15:33.720 05:13:30 -- nvmf/common.sh@478 -- # 
killprocess 247985 00:15:33.720 05:13:30 -- common/autotest_common.sh@936 -- # '[' -z 247985 ']' 00:15:33.720 05:13:30 -- common/autotest_common.sh@940 -- # kill -0 247985 00:15:33.720 05:13:30 -- common/autotest_common.sh@941 -- # uname 00:15:33.720 05:13:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:33.720 05:13:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 247985 00:15:33.720 05:13:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:33.720 05:13:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:33.720 05:13:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 247985' 00:15:33.720 killing process with pid 247985 00:15:33.720 05:13:30 -- common/autotest_common.sh@955 -- # kill 247985 00:15:33.720 05:13:30 -- common/autotest_common.sh@960 -- # wait 247985 00:15:33.980 [2024-11-20 05:13:30.681816] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:33.980 05:13:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:33.980 05:13:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:33.980 00:15:33.980 real 0m4.944s 00:15:33.980 user 0m22.294s 00:15:33.980 sys 0m0.806s 00:15:33.980 05:13:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:33.980 05:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:33.980 ************************************ 00:15:33.980 END TEST nvmf_host_management 00:15:33.980 ************************************ 00:15:33.980 05:13:30 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:15:33.980 00:15:33.980 real 0m10.451s 00:15:33.980 user 0m23.991s 00:15:33.980 sys 0m4.736s 00:15:33.980 05:13:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:33.980 05:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:33.980 ************************************ 00:15:33.980 END TEST nvmf_host_management 00:15:33.980 ************************************ 00:15:33.980 05:13:30 
-- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:15:33.980 05:13:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:33.980 05:13:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:33.980 05:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:33.980 ************************************ 00:15:33.980 START TEST nvmf_lvol 00:15:33.980 ************************************ 00:15:33.980 05:13:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:15:34.240 * Looking for test storage... 00:15:34.240 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:34.240 05:13:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:34.240 05:13:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:34.240 05:13:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:34.240 05:13:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:34.240 05:13:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:34.240 05:13:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:34.240 05:13:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:34.240 05:13:30 -- scripts/common.sh@335 -- # IFS=.-: 00:15:34.240 05:13:30 -- scripts/common.sh@335 -- # read -ra ver1 00:15:34.240 05:13:30 -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.240 05:13:30 -- scripts/common.sh@336 -- # read -ra ver2 00:15:34.240 05:13:30 -- scripts/common.sh@337 -- # local 'op=<' 00:15:34.240 05:13:30 -- scripts/common.sh@339 -- # ver1_l=2 00:15:34.240 05:13:30 -- scripts/common.sh@340 -- # ver2_l=1 00:15:34.240 05:13:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:34.240 05:13:30 -- scripts/common.sh@343 -- # case "$op" in 00:15:34.240 05:13:30 -- scripts/common.sh@344 -- # : 1 00:15:34.240 05:13:30 -- scripts/common.sh@363 -- 
# (( v = 0 )) 00:15:34.240 05:13:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.240 05:13:30 -- scripts/common.sh@364 -- # decimal 1 00:15:34.240 05:13:30 -- scripts/common.sh@352 -- # local d=1 00:15:34.240 05:13:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.240 05:13:30 -- scripts/common.sh@354 -- # echo 1 00:15:34.240 05:13:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:34.240 05:13:30 -- scripts/common.sh@365 -- # decimal 2 00:15:34.240 05:13:30 -- scripts/common.sh@352 -- # local d=2 00:15:34.240 05:13:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.240 05:13:30 -- scripts/common.sh@354 -- # echo 2 00:15:34.240 05:13:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:34.240 05:13:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:34.240 05:13:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:34.240 05:13:30 -- scripts/common.sh@367 -- # return 0 00:15:34.240 05:13:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.240 05:13:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:34.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.240 --rc genhtml_branch_coverage=1 00:15:34.240 --rc genhtml_function_coverage=1 00:15:34.240 --rc genhtml_legend=1 00:15:34.240 --rc geninfo_all_blocks=1 00:15:34.240 --rc geninfo_unexecuted_blocks=1 00:15:34.240 00:15:34.240 ' 00:15:34.240 05:13:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:34.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.240 --rc genhtml_branch_coverage=1 00:15:34.240 --rc genhtml_function_coverage=1 00:15:34.240 --rc genhtml_legend=1 00:15:34.240 --rc geninfo_all_blocks=1 00:15:34.240 --rc geninfo_unexecuted_blocks=1 00:15:34.240 00:15:34.240 ' 00:15:34.240 05:13:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:34.240 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:34.240 --rc genhtml_branch_coverage=1 00:15:34.240 --rc genhtml_function_coverage=1 00:15:34.240 --rc genhtml_legend=1 00:15:34.240 --rc geninfo_all_blocks=1 00:15:34.240 --rc geninfo_unexecuted_blocks=1 00:15:34.240 00:15:34.240 ' 00:15:34.240 05:13:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:34.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.240 --rc genhtml_branch_coverage=1 00:15:34.240 --rc genhtml_function_coverage=1 00:15:34.240 --rc genhtml_legend=1 00:15:34.240 --rc geninfo_all_blocks=1 00:15:34.240 --rc geninfo_unexecuted_blocks=1 00:15:34.240 00:15:34.240 ' 00:15:34.240 05:13:30 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.240 05:13:30 -- nvmf/common.sh@7 -- # uname -s 00:15:34.240 05:13:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.240 05:13:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.240 05:13:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.240 05:13:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.240 05:13:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.240 05:13:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.240 05:13:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.240 05:13:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.240 05:13:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.240 05:13:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.240 05:13:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:34.240 05:13:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:34.240 05:13:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.240 05:13:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.240 05:13:30 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy-fallback 00:15:34.240 05:13:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:34.240 05:13:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.240 05:13:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.241 05:13:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.241 05:13:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.241 05:13:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.241 05:13:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.241 05:13:30 -- paths/export.sh@5 -- # export PATH 00:15:34.241 05:13:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.241 05:13:30 -- nvmf/common.sh@46 -- # : 0 00:15:34.241 05:13:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:34.241 05:13:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:34.241 05:13:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:34.241 05:13:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.241 05:13:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.241 05:13:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:34.241 05:13:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:34.241 05:13:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:34.241 05:13:30 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.241 05:13:30 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.241 05:13:30 -- 
target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:34.241 05:13:30 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:34.241 05:13:30 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:34.241 05:13:30 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:34.241 05:13:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:34.241 05:13:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.241 05:13:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:34.241 05:13:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:34.241 05:13:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:34.241 05:13:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.241 05:13:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.241 05:13:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.241 05:13:30 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:15:34.241 05:13:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:34.241 05:13:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:34.241 05:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:39.522 05:13:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:39.522 05:13:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:39.522 05:13:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:39.522 05:13:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:39.522 05:13:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:39.522 05:13:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:39.522 05:13:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:39.522 05:13:35 -- nvmf/common.sh@294 -- # net_devs=() 00:15:39.522 05:13:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:39.522 05:13:35 -- nvmf/common.sh@295 -- # e810=() 00:15:39.522 05:13:35 -- nvmf/common.sh@295 -- # local -ga e810 00:15:39.522 05:13:35 -- nvmf/common.sh@296 -- # 
x722=() 00:15:39.522 05:13:35 -- nvmf/common.sh@296 -- # local -ga x722 00:15:39.522 05:13:35 -- nvmf/common.sh@297 -- # mlx=() 00:15:39.522 05:13:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:39.522 05:13:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.522 05:13:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:39.522 05:13:35 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:39.522 05:13:35 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:39.522 05:13:35 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:39.522 05:13:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:39.522 05:13:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:39.522 05:13:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:39.522 05:13:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:39.522 05:13:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:39.522 05:13:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:39.522 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:15:39.522 05:13:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:39.522 05:13:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:39.522 05:13:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.522 05:13:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.522 05:13:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:39.522 05:13:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:39.522 05:13:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:39.522 05:13:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:39.522 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:39.522 05:13:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:39.522 05:13:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:39.522 05:13:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.522 05:13:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.522 05:13:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:39.523 05:13:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:39.523 05:13:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:15:39.523 05:13:36 -- nvmf/common.sh@376 -- # modinfo irdma 00:15:39.523 05:13:36 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:15:39.523 05:13:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.523 05:13:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:39.523 05:13:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.523 05:13:36 -- nvmf/common.sh@388 -- # echo 'Found net 
devices under 0000:af:00.0: cvl_0_0' 00:15:39.523 Found net devices under 0000:af:00.0: cvl_0_0 00:15:39.523 05:13:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.523 05:13:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.523 05:13:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:39.523 05:13:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.523 05:13:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:39.523 Found net devices under 0000:af:00.1: cvl_0_1 00:15:39.523 05:13:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.523 05:13:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:39.523 05:13:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:39.523 05:13:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:39.523 05:13:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:39.523 05:13:36 -- nvmf/common.sh@57 -- # uname 00:15:39.523 05:13:36 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:39.523 05:13:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:39.523 05:13:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:39.523 05:13:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:39.523 05:13:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:39.523 05:13:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:39.523 05:13:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:39.523 05:13:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:39.523 05:13:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:39.523 05:13:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:39.523 05:13:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 
00:15:39.523 05:13:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:39.523 05:13:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:39.523 05:13:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:39.523 05:13:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:39.523 05:13:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:39.523 05:13:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:15:39.523 05:13:36 -- nvmf/common.sh@104 -- # continue 2 00:15:39.523 05:13:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:15:39.523 05:13:36 -- nvmf/common.sh@104 -- # continue 2 00:15:39.523 05:13:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:39.523 05:13:36 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:15:39.523 05:13:36 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:39.523 05:13:36 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:39.523 05:13:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:15:39.523 4: cvl_0_0: mtu 1500 
qdisc mq state UP group default qlen 1000 00:15:39.523 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:15:39.523 altname enp175s0f0np0 00:15:39.523 altname ens801f0np0 00:15:39.523 inet 192.168.100.8/24 scope global cvl_0_0 00:15:39.523 valid_lft forever preferred_lft forever 00:15:39.523 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:15:39.523 valid_lft forever preferred_lft forever 00:15:39.523 05:13:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:39.523 05:13:36 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:15:39.523 05:13:36 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:39.523 05:13:36 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:39.523 05:13:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:15:39.523 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:39.523 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:15:39.523 altname enp175s0f1np1 00:15:39.523 altname ens801f1np1 00:15:39.523 inet 192.168.100.9/24 scope global cvl_0_1 00:15:39.523 valid_lft forever preferred_lft forever 00:15:39.523 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:15:39.523 valid_lft forever preferred_lft forever 00:15:39.523 05:13:36 -- nvmf/common.sh@410 -- # return 0 00:15:39.523 05:13:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:39.523 05:13:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:39.523 05:13:36 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:39.523 05:13:36 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:39.523 05:13:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:39.523 
05:13:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:39.523 05:13:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:39.523 05:13:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:39.523 05:13:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:39.523 05:13:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:15:39.523 05:13:36 -- nvmf/common.sh@104 -- # continue 2 00:15:39.523 05:13:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:39.523 05:13:36 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:39.523 05:13:36 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:15:39.523 05:13:36 -- nvmf/common.sh@104 -- # continue 2 00:15:39.523 05:13:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:39.523 05:13:36 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:15:39.523 05:13:36 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:39.523 05:13:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:39.523 05:13:36 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:15:39.523 05:13:36 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:15:39.523 05:13:36 -- nvmf/common.sh@112 
-- # awk '{print $4}' 00:15:39.523 05:13:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:39.523 05:13:36 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:39.523 192.168.100.9' 00:15:39.523 05:13:36 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:39.523 192.168.100.9' 00:15:39.523 05:13:36 -- nvmf/common.sh@445 -- # head -n 1 00:15:39.523 05:13:36 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:39.523 05:13:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:39.523 192.168.100.9' 00:15:39.523 05:13:36 -- nvmf/common.sh@446 -- # tail -n +2 00:15:39.523 05:13:36 -- nvmf/common.sh@446 -- # head -n 1 00:15:39.523 05:13:36 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:39.523 05:13:36 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:39.523 05:13:36 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:39.523 05:13:36 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:39.523 05:13:36 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:39.523 05:13:36 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:39.523 05:13:36 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:39.523 05:13:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:39.523 05:13:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.523 05:13:36 -- common/autotest_common.sh@10 -- # set +x 00:15:39.524 05:13:36 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:39.524 05:13:36 -- nvmf/common.sh@469 -- # nvmfpid=251821 00:15:39.524 05:13:36 -- nvmf/common.sh@470 -- # waitforlisten 251821 00:15:39.524 05:13:36 -- common/autotest_common.sh@829 -- # '[' -z 251821 ']' 00:15:39.524 05:13:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.524 05:13:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.524 05:13:36 -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.524 05:13:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.524 05:13:36 -- common/autotest_common.sh@10 -- # set +x 00:15:39.524 [2024-11-20 05:13:36.255400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:39.524 [2024-11-20 05:13:36.255440] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.524 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.524 [2024-11-20 05:13:36.314948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.783 [2024-11-20 05:13:36.388974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:39.783 [2024-11-20 05:13:36.389089] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.783 [2024-11-20 05:13:36.389097] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.783 [2024-11-20 05:13:36.389103] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:39.783 [2024-11-20 05:13:36.389145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.783 [2024-11-20 05:13:36.389163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.783 [2024-11-20 05:13:36.389170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.353 05:13:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.353 05:13:37 -- common/autotest_common.sh@862 -- # return 0 00:15:40.353 05:13:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:40.353 05:13:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.353 05:13:37 -- common/autotest_common.sh@10 -- # set +x 00:15:40.353 05:13:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.353 05:13:37 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:40.613 [2024-11-20 05:13:37.274714] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x21065d0/0x2105c10) succeed. 00:15:40.613 [2024-11-20 05:13:37.283397] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x21078c0/0x2106190) succeed. 00:15:40.613 [2024-11-20 05:13:37.283421] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:15:40.613 05:13:37 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:40.872 05:13:37 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:40.872 05:13:37 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:40.872 05:13:37 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:40.872 05:13:37 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:41.131 05:13:37 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:41.390 05:13:38 -- target/nvmf_lvol.sh@29 -- # lvs=2954c962-1b10-43c1-a76e-9279d2a07367 00:15:41.390 05:13:38 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2954c962-1b10-43c1-a76e-9279d2a07367 lvol 20 00:15:41.652 05:13:38 -- target/nvmf_lvol.sh@32 -- # lvol=f7d2401d-97e8-4bcc-8587-082521ec9575 00:15:41.652 05:13:38 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:41.652 05:13:38 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f7d2401d-97e8-4bcc-8587-082521ec9575 00:15:41.910 05:13:38 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:42.169 [2024-11-20 05:13:38.776192] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:42.169 05:13:38 -- target/nvmf_lvol.sh@38 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:42.169 05:13:38 -- target/nvmf_lvol.sh@42 -- # perf_pid=252311 00:15:42.169 05:13:38 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:42.169 05:13:38 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:42.428 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.367 05:13:39 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f7d2401d-97e8-4bcc-8587-082521ec9575 MY_SNAPSHOT 00:15:43.627 05:13:40 -- target/nvmf_lvol.sh@47 -- # snapshot=63888a7e-2682-4e1c-a476-d1a8843fcc1f 00:15:43.627 05:13:40 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f7d2401d-97e8-4bcc-8587-082521ec9575 30 00:15:43.627 05:13:40 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 63888a7e-2682-4e1c-a476-d1a8843fcc1f MY_CLONE 00:15:43.886 05:13:40 -- target/nvmf_lvol.sh@49 -- # clone=3a724b30-d151-4b1b-b1c4-fad4bf989ef3 00:15:43.886 05:13:40 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3a724b30-d151-4b1b-b1c4-fad4bf989ef3 00:15:44.145 05:13:40 -- target/nvmf_lvol.sh@53 -- # wait 252311 00:15:54.134 Initializing NVMe Controllers 00:15:54.134 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:54.134 Controller IO queue size 128, less than required. 00:15:54.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:54.134 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:54.134 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:54.134 Initialization complete. Launching workers. 00:15:54.134 ======================================================== 00:15:54.134 Latency(us) 00:15:54.134 Device Information : IOPS MiB/s Average min max 00:15:54.134 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16913.30 66.07 7570.06 2197.16 37511.72 00:15:54.134 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17147.50 66.98 7465.68 3667.54 34628.30 00:15:54.134 ======================================================== 00:15:54.134 Total : 34060.80 133.05 7517.51 2197.16 37511.72 00:15:54.134 00:15:54.134 05:13:50 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:54.134 05:13:50 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f7d2401d-97e8-4bcc-8587-082521ec9575 00:15:54.134 05:13:50 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2954c962-1b10-43c1-a76e-9279d2a07367 00:15:54.393 05:13:50 -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:54.394 05:13:50 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:54.394 05:13:50 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:54.394 05:13:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:54.394 05:13:50 -- nvmf/common.sh@116 -- # sync 00:15:54.394 05:13:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:54.394 05:13:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:54.394 05:13:50 -- nvmf/common.sh@119 -- # set +e 00:15:54.394 05:13:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:54.394 05:13:50 -- nvmf/common.sh@121 -- # modprobe -v 
-r nvme-rdma 00:15:54.394 rmmod nvme_rdma 00:15:54.394 rmmod nvme_fabrics 00:15:54.394 05:13:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:54.394 05:13:51 -- nvmf/common.sh@123 -- # set -e 00:15:54.394 05:13:51 -- nvmf/common.sh@124 -- # return 0 00:15:54.394 05:13:51 -- nvmf/common.sh@477 -- # '[' -n 251821 ']' 00:15:54.394 05:13:51 -- nvmf/common.sh@478 -- # killprocess 251821 00:15:54.394 05:13:51 -- common/autotest_common.sh@936 -- # '[' -z 251821 ']' 00:15:54.394 05:13:51 -- common/autotest_common.sh@940 -- # kill -0 251821 00:15:54.394 05:13:51 -- common/autotest_common.sh@941 -- # uname 00:15:54.394 05:13:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:54.394 05:13:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 251821 00:15:54.394 05:13:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:54.394 05:13:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:54.394 05:13:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 251821' 00:15:54.394 killing process with pid 251821 00:15:54.394 05:13:51 -- common/autotest_common.sh@955 -- # kill 251821 00:15:54.394 05:13:51 -- common/autotest_common.sh@960 -- # wait 251821 00:15:54.654 05:13:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:54.654 05:13:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:54.654 00:15:54.654 real 0m20.565s 00:15:54.654 user 1m11.246s 00:15:54.654 sys 0m4.875s 00:15:54.654 05:13:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:54.654 05:13:51 -- common/autotest_common.sh@10 -- # set +x 00:15:54.654 ************************************ 00:15:54.654 END TEST nvmf_lvol 00:15:54.654 ************************************ 00:15:54.654 05:13:51 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:15:54.654 05:13:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 
']' 00:15:54.654 05:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.654 05:13:51 -- common/autotest_common.sh@10 -- # set +x 00:15:54.654 ************************************ 00:15:54.654 START TEST nvmf_lvs_grow 00:15:54.654 ************************************ 00:15:54.654 05:13:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:15:54.654 * Looking for test storage... 00:15:54.654 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:54.654 05:13:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:54.654 05:13:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:54.654 05:13:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:54.914 05:13:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:54.914 05:13:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:54.914 05:13:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:54.914 05:13:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:54.914 05:13:51 -- scripts/common.sh@335 -- # IFS=.-: 00:15:54.914 05:13:51 -- scripts/common.sh@335 -- # read -ra ver1 00:15:54.914 05:13:51 -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.914 05:13:51 -- scripts/common.sh@336 -- # read -ra ver2 00:15:54.914 05:13:51 -- scripts/common.sh@337 -- # local 'op=<' 00:15:54.914 05:13:51 -- scripts/common.sh@339 -- # ver1_l=2 00:15:54.914 05:13:51 -- scripts/common.sh@340 -- # ver2_l=1 00:15:54.914 05:13:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:54.914 05:13:51 -- scripts/common.sh@343 -- # case "$op" in 00:15:54.914 05:13:51 -- scripts/common.sh@344 -- # : 1 00:15:54.914 05:13:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:54.915 05:13:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.915 05:13:51 -- scripts/common.sh@364 -- # decimal 1 00:15:54.915 05:13:51 -- scripts/common.sh@352 -- # local d=1 00:15:54.915 05:13:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.915 05:13:51 -- scripts/common.sh@354 -- # echo 1 00:15:54.915 05:13:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:54.915 05:13:51 -- scripts/common.sh@365 -- # decimal 2 00:15:54.915 05:13:51 -- scripts/common.sh@352 -- # local d=2 00:15:54.915 05:13:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.915 05:13:51 -- scripts/common.sh@354 -- # echo 2 00:15:54.915 05:13:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:54.915 05:13:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:54.915 05:13:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:54.915 05:13:51 -- scripts/common.sh@367 -- # return 0 00:15:54.915 05:13:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.915 05:13:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:54.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.915 --rc genhtml_branch_coverage=1 00:15:54.915 --rc genhtml_function_coverage=1 00:15:54.915 --rc genhtml_legend=1 00:15:54.915 --rc geninfo_all_blocks=1 00:15:54.915 --rc geninfo_unexecuted_blocks=1 00:15:54.915 00:15:54.915 ' 00:15:54.915 05:13:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:54.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.915 --rc genhtml_branch_coverage=1 00:15:54.915 --rc genhtml_function_coverage=1 00:15:54.915 --rc genhtml_legend=1 00:15:54.915 --rc geninfo_all_blocks=1 00:15:54.915 --rc geninfo_unexecuted_blocks=1 00:15:54.915 00:15:54.915 ' 00:15:54.915 05:13:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:54.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.915 --rc genhtml_branch_coverage=1 00:15:54.915 --rc 
genhtml_function_coverage=1 00:15:54.915 --rc genhtml_legend=1 00:15:54.915 --rc geninfo_all_blocks=1 00:15:54.915 --rc geninfo_unexecuted_blocks=1 00:15:54.915 00:15:54.915 ' 00:15:54.915 05:13:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:54.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.915 --rc genhtml_branch_coverage=1 00:15:54.915 --rc genhtml_function_coverage=1 00:15:54.915 --rc genhtml_legend=1 00:15:54.915 --rc geninfo_all_blocks=1 00:15:54.915 --rc geninfo_unexecuted_blocks=1 00:15:54.915 00:15:54.915 ' 00:15:54.915 05:13:51 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.915 05:13:51 -- nvmf/common.sh@7 -- # uname -s 00:15:54.915 05:13:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.915 05:13:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.915 05:13:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.915 05:13:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.915 05:13:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.915 05:13:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.915 05:13:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.915 05:13:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.915 05:13:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.915 05:13:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.915 05:13:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:54.915 05:13:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:54.915 05:13:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.915 05:13:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.915 05:13:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:54.915 05:13:51 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:54.915 05:13:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.915 05:13:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.915 05:13:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.915 05:13:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.915 05:13:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.915 05:13:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.915 05:13:51 -- paths/export.sh@5 -- # export PATH 00:15:54.915 05:13:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.915 05:13:51 -- nvmf/common.sh@46 -- # : 0 00:15:54.915 05:13:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:54.915 05:13:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:54.915 05:13:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:54.915 05:13:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.915 05:13:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.915 05:13:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:54.915 05:13:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:54.915 05:13:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:54.915 05:13:51 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:54.915 05:13:51 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:54.915 05:13:51 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:15:54.915 05:13:51 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:54.915 05:13:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.915 05:13:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:54.915 05:13:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:54.915 05:13:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:54.915 05:13:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.915 05:13:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.915 05:13:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.915 05:13:51 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:15:54.915 05:13:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:54.915 05:13:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:54.915 05:13:51 -- common/autotest_common.sh@10 -- # set +x 00:16:00.194 05:13:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:00.194 05:13:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:00.194 05:13:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:00.194 05:13:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:00.194 05:13:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:00.194 05:13:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:00.194 05:13:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:00.194 05:13:56 -- nvmf/common.sh@294 -- # net_devs=() 00:16:00.194 05:13:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:00.194 05:13:56 -- nvmf/common.sh@295 -- # e810=() 00:16:00.194 05:13:56 -- nvmf/common.sh@295 -- # local -ga e810 00:16:00.194 05:13:56 -- nvmf/common.sh@296 -- # x722=() 00:16:00.194 05:13:56 -- nvmf/common.sh@296 -- # local -ga x722 00:16:00.194 05:13:56 -- nvmf/common.sh@297 -- # mlx=() 00:16:00.194 05:13:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:00.194 
05:13:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.194 05:13:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.194 05:13:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.195 05:13:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.195 05:13:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.195 05:13:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.195 05:13:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.195 05:13:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.195 05:13:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.195 05:13:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.195 05:13:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.195 05:13:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:00.195 05:13:56 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:00.195 05:13:56 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:00.195 05:13:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:00.195 05:13:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:00.195 05:13:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:00.195 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:00.195 05:13:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:16:00.195 05:13:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:00.195 05:13:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:00.195 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:00.195 05:13:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:00.195 05:13:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:00.195 05:13:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:16:00.195 05:13:56 -- nvmf/common.sh@376 -- # modinfo irdma 00:16:00.195 05:13:56 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:16:00.195 05:13:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.195 05:13:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:00.195 05:13:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.195 05:13:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:00.195 Found net devices under 0000:af:00.0: cvl_0_0 00:16:00.195 05:13:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.195 05:13:56 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.195 05:13:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:00.195 05:13:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.195 05:13:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:00.195 Found net devices under 0000:af:00.1: cvl_0_1 00:16:00.195 05:13:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.195 05:13:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:00.195 05:13:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:00.195 05:13:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:00.195 05:13:56 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:00.195 05:13:56 -- nvmf/common.sh@57 -- # uname 00:16:00.195 05:13:56 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:00.195 05:13:56 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:00.195 05:13:56 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:00.195 05:13:56 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:00.195 05:13:56 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:00.195 05:13:56 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:00.195 05:13:56 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:00.195 05:13:56 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:00.195 05:13:56 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:00.195 05:13:56 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:00.195 05:13:56 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:00.195 05:13:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:00.195 05:13:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:00.195 05:13:56 -- nvmf/common.sh@93 -- 
# rxe_cfg rxe-net 00:16:00.195 05:13:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:00.195 05:13:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:00.195 05:13:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:16:00.195 05:13:56 -- nvmf/common.sh@104 -- # continue 2 00:16:00.195 05:13:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:16:00.195 05:13:56 -- nvmf/common.sh@104 -- # continue 2 00:16:00.195 05:13:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:00.195 05:13:56 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:16:00.195 05:13:56 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:16:00.195 05:13:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:16:00.195 05:13:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:00.195 05:13:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:00.195 05:13:56 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:00.195 05:13:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:16:00.195 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:00.195 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:00.195 altname enp175s0f0np0 00:16:00.195 altname ens801f0np0 00:16:00.195 inet 192.168.100.8/24 
scope global cvl_0_0 00:16:00.195 valid_lft forever preferred_lft forever 00:16:00.195 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:00.195 valid_lft forever preferred_lft forever 00:16:00.195 05:13:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:00.195 05:13:56 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:16:00.195 05:13:56 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:16:00.195 05:13:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:00.195 05:13:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:00.195 05:13:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:16:00.195 05:13:56 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:00.195 05:13:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:16:00.195 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:00.195 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:00.195 altname enp175s0f1np1 00:16:00.195 altname ens801f1np1 00:16:00.195 inet 192.168.100.9/24 scope global cvl_0_1 00:16:00.195 valid_lft forever preferred_lft forever 00:16:00.195 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:00.195 valid_lft forever preferred_lft forever 00:16:00.195 05:13:56 -- nvmf/common.sh@410 -- # return 0 00:16:00.195 05:13:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:00.195 05:13:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:00.195 05:13:56 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:00.195 05:13:56 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:00.195 05:13:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:00.195 05:13:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:00.195 05:13:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:00.195 05:13:56 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:00.195 05:13:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:00.195 05:13:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:16:00.195 05:13:56 -- nvmf/common.sh@104 -- # continue 2 00:16:00.195 05:13:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.195 05:13:56 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:00.195 05:13:56 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:16:00.195 05:13:56 -- nvmf/common.sh@104 -- # continue 2 00:16:00.195 05:13:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:00.195 05:13:56 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:16:00.195 05:13:56 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:16:00.195 05:13:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:16:00.195 05:13:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:00.195 05:13:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:00.196 05:13:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:00.196 05:13:56 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:16:00.196 05:13:56 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:16:00.196 05:13:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:16:00.196 05:13:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:00.196 05:13:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:00.196 05:13:56 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:00.196 
192.168.100.9' 00:16:00.196 05:13:56 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:00.196 192.168.100.9' 00:16:00.196 05:13:56 -- nvmf/common.sh@445 -- # head -n 1 00:16:00.196 05:13:56 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:00.196 05:13:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:00.196 192.168.100.9' 00:16:00.196 05:13:56 -- nvmf/common.sh@446 -- # tail -n +2 00:16:00.196 05:13:56 -- nvmf/common.sh@446 -- # head -n 1 00:16:00.196 05:13:56 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:00.196 05:13:56 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:00.196 05:13:56 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:00.196 05:13:56 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:00.196 05:13:56 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:00.196 05:13:56 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:00.196 05:13:56 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:00.196 05:13:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:00.196 05:13:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.196 05:13:56 -- common/autotest_common.sh@10 -- # set +x 00:16:00.196 05:13:56 -- nvmf/common.sh@469 -- # nvmfpid=257430 00:16:00.196 05:13:56 -- nvmf/common.sh@470 -- # waitforlisten 257430 00:16:00.196 05:13:56 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:00.196 05:13:56 -- common/autotest_common.sh@829 -- # '[' -z 257430 ']' 00:16:00.196 05:13:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.196 05:13:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.196 05:13:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:00.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.196 05:13:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.196 05:13:56 -- common/autotest_common.sh@10 -- # set +x 00:16:00.196 [2024-11-20 05:13:56.494671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:00.196 [2024-11-20 05:13:56.494714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.196 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.196 [2024-11-20 05:13:56.547396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.196 [2024-11-20 05:13:56.620338] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:00.196 [2024-11-20 05:13:56.620447] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.196 [2024-11-20 05:13:56.620456] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.196 [2024-11-20 05:13:56.620463] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:00.196 [2024-11-20 05:13:56.620481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.766 05:13:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.766 05:13:57 -- common/autotest_common.sh@862 -- # return 0 00:16:00.766 05:13:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:00.766 05:13:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.766 05:13:57 -- common/autotest_common.sh@10 -- # set +x 00:16:00.766 05:13:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:00.766 [2024-11-20 05:13:57.524001] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x63cfa0/0x63c5e0) succeed. 00:16:00.766 [2024-11-20 05:13:57.533165] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x63e250/0x63cb60) succeed. 00:16:00.766 [2024-11-20 05:13:57.533188] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:00.766 05:13:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:00.766 05:13:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:00.766 05:13:57 -- common/autotest_common.sh@10 -- # set +x 00:16:00.766 ************************************ 00:16:00.766 START TEST lvs_grow_clean 00:16:00.766 ************************************ 00:16:00.766 05:13:57 -- common/autotest_common.sh@1114 -- # lvs_grow 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:00.766 05:13:57 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:01.026 05:13:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:01.026 05:13:57 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:01.286 05:13:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=264125ea-b58b-4480-8c6b-68bcb96af739 00:16:01.286 05:13:57 -- 
target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:01.286 05:13:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:01.545 05:13:58 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:01.545 05:13:58 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:01.545 05:13:58 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 264125ea-b58b-4480-8c6b-68bcb96af739 lvol 150 00:16:01.545 05:13:58 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a29348b1-b6db-4f35-b530-f4fcd02c38ac 00:16:01.546 05:13:58 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:01.546 05:13:58 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:01.805 [2024-11-20 05:13:58.454830] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:01.805 [2024-11-20 05:13:58.454881] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:01.805 true 00:16:01.805 05:13:58 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:01.805 05:13:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:02.065 05:13:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:02.065 05:13:58 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:02.065 05:13:58 -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a29348b1-b6db-4f35-b530-f4fcd02c38ac 00:16:02.325 05:13:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:02.325 [2024-11-20 05:13:59.140854] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:02.583 05:13:59 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:02.583 05:13:59 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=257938 00:16:02.583 05:13:59 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:02.583 05:13:59 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 257938 /var/tmp/bdevperf.sock 00:16:02.583 05:13:59 -- common/autotest_common.sh@829 -- # '[' -z 257938 ']' 00:16:02.583 05:13:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:02.583 05:13:59 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:02.583 05:13:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.583 05:13:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:02.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:02.583 05:13:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.583 05:13:59 -- common/autotest_common.sh@10 -- # set +x 00:16:02.583 [2024-11-20 05:13:59.356589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:02.583 [2024-11-20 05:13:59.356636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257938 ] 00:16:02.583 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.841 [2024-11-20 05:13:59.411480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.841 [2024-11-20 05:13:59.485781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.411 05:14:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.411 05:14:00 -- common/autotest_common.sh@862 -- # return 0 00:16:03.411 05:14:00 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:03.670 Nvme0n1 00:16:03.670 05:14:00 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:03.929 [ 00:16:03.929 { 00:16:03.929 "name": "Nvme0n1", 00:16:03.929 "aliases": [ 00:16:03.929 "a29348b1-b6db-4f35-b530-f4fcd02c38ac" 00:16:03.929 ], 00:16:03.929 "product_name": "NVMe disk", 00:16:03.929 "block_size": 4096, 00:16:03.929 "num_blocks": 38912, 00:16:03.929 "uuid": "a29348b1-b6db-4f35-b530-f4fcd02c38ac", 00:16:03.929 "assigned_rate_limits": { 00:16:03.929 "rw_ios_per_sec": 0, 00:16:03.929 "rw_mbytes_per_sec": 0, 00:16:03.929 "r_mbytes_per_sec": 0, 00:16:03.929 "w_mbytes_per_sec": 0 00:16:03.929 }, 00:16:03.929 "claimed": false, 00:16:03.929 "zoned": false, 00:16:03.929 "supported_io_types": { 00:16:03.929 "read": true, 00:16:03.929 "write": true, 00:16:03.929 "unmap": true, 00:16:03.929 "write_zeroes": true, 00:16:03.929 "flush": true, 00:16:03.929 "reset": true, 00:16:03.929 "compare": true, 00:16:03.929 
"compare_and_write": true, 00:16:03.929 "abort": true, 00:16:03.929 "nvme_admin": true, 00:16:03.929 "nvme_io": true 00:16:03.929 }, 00:16:03.929 "memory_domains": [ 00:16:03.929 { 00:16:03.929 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:03.929 "dma_device_type": 0 00:16:03.929 } 00:16:03.929 ], 00:16:03.929 "driver_specific": { 00:16:03.929 "nvme": [ 00:16:03.929 { 00:16:03.929 "trid": { 00:16:03.929 "trtype": "RDMA", 00:16:03.929 "adrfam": "IPv4", 00:16:03.929 "traddr": "192.168.100.8", 00:16:03.929 "trsvcid": "4420", 00:16:03.929 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:03.929 }, 00:16:03.929 "ctrlr_data": { 00:16:03.929 "cntlid": 1, 00:16:03.929 "vendor_id": "0x8086", 00:16:03.929 "model_number": "SPDK bdev Controller", 00:16:03.929 "serial_number": "SPDK0", 00:16:03.929 "firmware_revision": "24.01.1", 00:16:03.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.929 "oacs": { 00:16:03.929 "security": 0, 00:16:03.929 "format": 0, 00:16:03.929 "firmware": 0, 00:16:03.929 "ns_manage": 0 00:16:03.929 }, 00:16:03.929 "multi_ctrlr": true, 00:16:03.929 "ana_reporting": false 00:16:03.929 }, 00:16:03.929 "vs": { 00:16:03.929 "nvme_version": "1.3" 00:16:03.929 }, 00:16:03.929 "ns_data": { 00:16:03.929 "id": 1, 00:16:03.929 "can_share": true 00:16:03.929 } 00:16:03.929 } 00:16:03.929 ], 00:16:03.929 "mp_policy": "active_passive" 00:16:03.929 } 00:16:03.929 } 00:16:03.929 ] 00:16:03.929 05:14:00 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=258168 00:16:03.929 05:14:00 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:03.929 05:14:00 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:03.929 Running I/O for 10 seconds... 
00:16:05.309 Latency(us) 00:16:05.309 [2024-11-20T04:14:02.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.309 [2024-11-20T04:14:02.137Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.309 Nvme0n1 : 1.00 36124.00 141.11 0.00 0.00 0.00 0.00 0.00 00:16:05.309 [2024-11-20T04:14:02.137Z] =================================================================================================================== 00:16:05.309 [2024-11-20T04:14:02.137Z] Total : 36124.00 141.11 0.00 0.00 0.00 0.00 0.00 00:16:05.309 00:16:05.878 05:14:02 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:05.878 [2024-11-20T04:14:02.706Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.878 Nvme0n1 : 2.00 36578.50 142.88 0.00 0.00 0.00 0.00 0.00 00:16:05.878 [2024-11-20T04:14:02.706Z] =================================================================================================================== 00:16:05.878 [2024-11-20T04:14:02.706Z] Total : 36578.50 142.88 0.00 0.00 0.00 0.00 0.00 00:16:05.878 00:16:06.138 true 00:16:06.138 05:14:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:06.138 05:14:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:06.398 05:14:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:06.398 05:14:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:06.398 05:14:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 258168 00:16:06.967 [2024-11-20T04:14:03.795Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.967 Nvme0n1 : 3.00 36735.33 143.50 0.00 0.00 0.00 0.00 0.00 00:16:06.967 [2024-11-20T04:14:03.795Z] 
=================================================================================================================== 00:16:06.967 [2024-11-20T04:14:03.795Z] Total : 36735.33 143.50 0.00 0.00 0.00 0.00 0.00 00:16:06.967 00:16:07.906 [2024-11-20T04:14:04.734Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.906 Nvme0n1 : 4.00 36872.75 144.03 0.00 0.00 0.00 0.00 0.00 00:16:07.906 [2024-11-20T04:14:04.734Z] =================================================================================================================== 00:16:07.906 [2024-11-20T04:14:04.734Z] Total : 36872.75 144.03 0.00 0.00 0.00 0.00 0.00 00:16:07.906 00:16:09.287 [2024-11-20T04:14:06.115Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:09.287 Nvme0n1 : 5.00 36954.40 144.35 0.00 0.00 0.00 0.00 0.00 00:16:09.287 [2024-11-20T04:14:06.115Z] =================================================================================================================== 00:16:09.287 [2024-11-20T04:14:06.115Z] Total : 36954.40 144.35 0.00 0.00 0.00 0.00 0.00 00:16:09.287 00:16:10.225 [2024-11-20T04:14:07.053Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.225 Nvme0n1 : 6.00 36986.50 144.48 0.00 0.00 0.00 0.00 0.00 00:16:10.225 [2024-11-20T04:14:07.053Z] =================================================================================================================== 00:16:10.225 [2024-11-20T04:14:07.053Z] Total : 36986.50 144.48 0.00 0.00 0.00 0.00 0.00 00:16:10.225 00:16:11.163 [2024-11-20T04:14:07.991Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.163 Nvme0n1 : 7.00 37037.57 144.68 0.00 0.00 0.00 0.00 0.00 00:16:11.163 [2024-11-20T04:14:07.991Z] =================================================================================================================== 00:16:11.163 [2024-11-20T04:14:07.991Z] Total : 37037.57 144.68 0.00 0.00 0.00 0.00 0.00 00:16:11.163 
00:16:12.102 [2024-11-20T04:14:08.930Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:12.102 Nvme0n1 : 8.00 37076.50 144.83 0.00 0.00 0.00 0.00 0.00 00:16:12.102 [2024-11-20T04:14:08.930Z] =================================================================================================================== 00:16:12.102 [2024-11-20T04:14:08.930Z] Total : 37076.50 144.83 0.00 0.00 0.00 0.00 0.00 00:16:12.102 00:16:13.038 [2024-11-20T04:14:09.867Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:13.039 Nvme0n1 : 9.00 37112.56 144.97 0.00 0.00 0.00 0.00 0.00 00:16:13.039 [2024-11-20T04:14:09.867Z] =================================================================================================================== 00:16:13.039 [2024-11-20T04:14:09.867Z] Total : 37112.56 144.97 0.00 0.00 0.00 0.00 0.00 00:16:13.039 00:16:13.975 [2024-11-20T04:14:10.803Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:13.975 Nvme0n1 : 10.00 37106.90 144.95 0.00 0.00 0.00 0.00 0.00 00:16:13.975 [2024-11-20T04:14:10.803Z] =================================================================================================================== 00:16:13.975 [2024-11-20T04:14:10.803Z] Total : 37106.90 144.95 0.00 0.00 0.00 0.00 0.00 00:16:13.975 00:16:13.975 00:16:13.975 Latency(us) 00:16:13.975 [2024-11-20T04:14:10.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.975 [2024-11-20T04:14:10.803Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:13.975 Nvme0n1 : 10.00 37106.01 144.95 0.00 0.00 3446.74 2090.91 17351.44 00:16:13.975 [2024-11-20T04:14:10.803Z] =================================================================================================================== 00:16:13.975 [2024-11-20T04:14:10.803Z] Total : 37106.01 144.95 0.00 0.00 3446.74 2090.91 17351.44 00:16:13.975 0 00:16:13.975 05:14:10 -- 
target/nvmf_lvs_grow.sh@66 -- # killprocess 257938 00:16:13.975 05:14:10 -- common/autotest_common.sh@936 -- # '[' -z 257938 ']' 00:16:13.975 05:14:10 -- common/autotest_common.sh@940 -- # kill -0 257938 00:16:13.975 05:14:10 -- common/autotest_common.sh@941 -- # uname 00:16:13.975 05:14:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.975 05:14:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 257938 00:16:13.975 05:14:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:13.975 05:14:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:13.975 05:14:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 257938' 00:16:13.975 killing process with pid 257938 00:16:13.975 05:14:10 -- common/autotest_common.sh@955 -- # kill 257938 00:16:13.975 Received shutdown signal, test time was about 10.000000 seconds 00:16:13.975 00:16:13.975 Latency(us) 00:16:13.975 [2024-11-20T04:14:10.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.975 [2024-11-20T04:14:10.804Z] =================================================================================================================== 00:16:13.976 [2024-11-20T04:14:10.804Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.976 05:14:10 -- common/autotest_common.sh@960 -- # wait 257938 00:16:14.235 05:14:11 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:14.494 05:14:11 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:14.494 05:14:11 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:14.753 05:14:11 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:14.753 05:14:11 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:14.753 05:14:11 -- 
target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:14.753 [2024-11-20 05:14:11.574898] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:15.013 05:14:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:15.013 05:14:11 -- common/autotest_common.sh@650 -- # local es=0 00:16:15.013 05:14:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:15.013 05:14:11 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:15.013 05:14:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.013 05:14:11 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:15.013 05:14:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.013 05:14:11 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:15.013 05:14:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.013 05:14:11 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:15.013 05:14:11 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:16:15.013 05:14:11 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:15.013 request: 00:16:15.013 { 00:16:15.013 "uuid": "264125ea-b58b-4480-8c6b-68bcb96af739", 00:16:15.013 "method": "bdev_lvol_get_lvstores", 
00:16:15.013 "req_id": 1 00:16:15.013 } 00:16:15.013 Got JSON-RPC error response 00:16:15.013 response: 00:16:15.013 { 00:16:15.013 "code": -19, 00:16:15.013 "message": "No such device" 00:16:15.013 } 00:16:15.013 05:14:11 -- common/autotest_common.sh@653 -- # es=1 00:16:15.013 05:14:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:15.013 05:14:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:15.013 05:14:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:15.013 05:14:11 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:15.273 aio_bdev 00:16:15.273 05:14:11 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a29348b1-b6db-4f35-b530-f4fcd02c38ac 00:16:15.273 05:14:11 -- common/autotest_common.sh@897 -- # local bdev_name=a29348b1-b6db-4f35-b530-f4fcd02c38ac 00:16:15.273 05:14:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:15.273 05:14:11 -- common/autotest_common.sh@899 -- # local i 00:16:15.273 05:14:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:15.273 05:14:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:15.273 05:14:11 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:15.533 05:14:12 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a29348b1-b6db-4f35-b530-f4fcd02c38ac -t 2000 00:16:15.533 [ 00:16:15.533 { 00:16:15.533 "name": "a29348b1-b6db-4f35-b530-f4fcd02c38ac", 00:16:15.533 "aliases": [ 00:16:15.533 "lvs/lvol" 00:16:15.533 ], 00:16:15.533 "product_name": "Logical Volume", 00:16:15.533 "block_size": 4096, 00:16:15.533 "num_blocks": 38912, 00:16:15.533 "uuid": "a29348b1-b6db-4f35-b530-f4fcd02c38ac", 00:16:15.533 "assigned_rate_limits": { 00:16:15.533 "rw_ios_per_sec": 0, 
00:16:15.533 "rw_mbytes_per_sec": 0, 00:16:15.533 "r_mbytes_per_sec": 0, 00:16:15.533 "w_mbytes_per_sec": 0 00:16:15.533 }, 00:16:15.533 "claimed": false, 00:16:15.533 "zoned": false, 00:16:15.533 "supported_io_types": { 00:16:15.533 "read": true, 00:16:15.533 "write": true, 00:16:15.533 "unmap": true, 00:16:15.533 "write_zeroes": true, 00:16:15.533 "flush": false, 00:16:15.533 "reset": true, 00:16:15.533 "compare": false, 00:16:15.533 "compare_and_write": false, 00:16:15.533 "abort": false, 00:16:15.533 "nvme_admin": false, 00:16:15.533 "nvme_io": false 00:16:15.533 }, 00:16:15.533 "driver_specific": { 00:16:15.533 "lvol": { 00:16:15.533 "lvol_store_uuid": "264125ea-b58b-4480-8c6b-68bcb96af739", 00:16:15.533 "base_bdev": "aio_bdev", 00:16:15.533 "thin_provision": false, 00:16:15.533 "snapshot": false, 00:16:15.533 "clone": false, 00:16:15.533 "esnap_clone": false 00:16:15.533 } 00:16:15.533 } 00:16:15.533 } 00:16:15.533 ] 00:16:15.533 05:14:12 -- common/autotest_common.sh@905 -- # return 0 00:16:15.533 05:14:12 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:15.533 05:14:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:15.793 05:14:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:15.793 05:14:12 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:15.793 05:14:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:16.052 05:14:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:16.052 05:14:12 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a29348b1-b6db-4f35-b530-f4fcd02c38ac 00:16:16.052 05:14:12 -- target/nvmf_lvs_grow.sh@92 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 264125ea-b58b-4480-8c6b-68bcb96af739 00:16:16.312 05:14:13 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.572 00:16:16.572 real 0m15.653s 00:16:16.572 user 0m15.807s 00:16:16.572 sys 0m0.943s 00:16:16.572 05:14:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:16.572 05:14:13 -- common/autotest_common.sh@10 -- # set +x 00:16:16.572 ************************************ 00:16:16.572 END TEST lvs_grow_clean 00:16:16.572 ************************************ 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:16.572 05:14:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:16.572 05:14:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.572 05:14:13 -- common/autotest_common.sh@10 -- # set +x 00:16:16.572 ************************************ 00:16:16.572 START TEST lvs_grow_dirty 00:16:16.572 ************************************ 00:16:16.572 05:14:13 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.572 05:14:13 -- 
target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.572 05:14:13 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:16.832 05:14:13 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:16.832 05:14:13 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:16.832 05:14:13 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:16.832 05:14:13 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:16.832 05:14:13 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:17.093 05:14:13 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:17.093 05:14:13 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:17.093 05:14:13 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 lvol 150 00:16:17.353 05:14:13 -- target/nvmf_lvs_grow.sh@33 -- # lvol=664345d3-b2d6-4059-b314-a78eda4315bf 00:16:17.353 05:14:13 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:17.353 05:14:13 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:17.353 [2024-11-20 05:14:14.151580] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 
00:16:17.353 [2024-11-20 05:14:14.151629] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:17.353 true 00:16:17.353 05:14:14 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:17.353 05:14:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:17.612 05:14:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:17.613 05:14:14 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:17.872 05:14:14 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 664345d3-b2d6-4059-b314-a78eda4315bf 00:16:17.872 05:14:14 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:18.132 05:14:14 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:18.392 05:14:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=260536 00:16:18.392 05:14:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:18.392 05:14:15 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 260536 /var/tmp/bdevperf.sock 00:16:18.392 05:14:15 -- common/autotest_common.sh@829 -- # '[' -z 260536 ']' 00:16:18.392 05:14:15 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:18.392 05:14:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:16:18.392 05:14:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.392 05:14:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:18.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:18.392 05:14:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.392 05:14:15 -- common/autotest_common.sh@10 -- # set +x 00:16:18.392 [2024-11-20 05:14:15.052338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:18.392 [2024-11-20 05:14:15.052387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid260536 ] 00:16:18.392 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.392 [2024-11-20 05:14:15.105973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.392 [2024-11-20 05:14:15.180497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.334 05:14:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.334 05:14:15 -- common/autotest_common.sh@862 -- # return 0 00:16:19.334 05:14:15 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:19.334 Nvme0n1 00:16:19.334 05:14:16 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:19.593 [ 00:16:19.594 { 00:16:19.594 "name": "Nvme0n1", 00:16:19.594 "aliases": [ 00:16:19.594 "664345d3-b2d6-4059-b314-a78eda4315bf" 00:16:19.594 ], 00:16:19.594 "product_name": "NVMe disk", 00:16:19.594 "block_size": 4096, 00:16:19.594 
"num_blocks": 38912, 00:16:19.594 "uuid": "664345d3-b2d6-4059-b314-a78eda4315bf", 00:16:19.594 "assigned_rate_limits": { 00:16:19.594 "rw_ios_per_sec": 0, 00:16:19.594 "rw_mbytes_per_sec": 0, 00:16:19.594 "r_mbytes_per_sec": 0, 00:16:19.594 "w_mbytes_per_sec": 0 00:16:19.594 }, 00:16:19.594 "claimed": false, 00:16:19.594 "zoned": false, 00:16:19.594 "supported_io_types": { 00:16:19.594 "read": true, 00:16:19.594 "write": true, 00:16:19.594 "unmap": true, 00:16:19.594 "write_zeroes": true, 00:16:19.594 "flush": true, 00:16:19.594 "reset": true, 00:16:19.594 "compare": true, 00:16:19.594 "compare_and_write": true, 00:16:19.594 "abort": true, 00:16:19.594 "nvme_admin": true, 00:16:19.594 "nvme_io": true 00:16:19.594 }, 00:16:19.594 "memory_domains": [ 00:16:19.594 { 00:16:19.594 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:19.594 "dma_device_type": 0 00:16:19.594 } 00:16:19.594 ], 00:16:19.594 "driver_specific": { 00:16:19.594 "nvme": [ 00:16:19.594 { 00:16:19.594 "trid": { 00:16:19.594 "trtype": "RDMA", 00:16:19.594 "adrfam": "IPv4", 00:16:19.594 "traddr": "192.168.100.8", 00:16:19.594 "trsvcid": "4420", 00:16:19.594 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:19.594 }, 00:16:19.594 "ctrlr_data": { 00:16:19.594 "cntlid": 1, 00:16:19.594 "vendor_id": "0x8086", 00:16:19.594 "model_number": "SPDK bdev Controller", 00:16:19.594 "serial_number": "SPDK0", 00:16:19.594 "firmware_revision": "24.01.1", 00:16:19.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:19.594 "oacs": { 00:16:19.594 "security": 0, 00:16:19.594 "format": 0, 00:16:19.594 "firmware": 0, 00:16:19.594 "ns_manage": 0 00:16:19.594 }, 00:16:19.594 "multi_ctrlr": true, 00:16:19.594 "ana_reporting": false 00:16:19.594 }, 00:16:19.594 "vs": { 00:16:19.594 "nvme_version": "1.3" 00:16:19.594 }, 00:16:19.594 "ns_data": { 00:16:19.594 "id": 1, 00:16:19.594 "can_share": true 00:16:19.594 } 00:16:19.594 } 00:16:19.594 ], 00:16:19.594 "mp_policy": "active_passive" 00:16:19.594 } 00:16:19.594 } 00:16:19.594 ] 
00:16:19.594 05:14:16 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=260773 00:16:19.594 05:14:16 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:19.594 05:14:16 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:19.594 Running I/O for 10 seconds... 00:16:20.533 Latency(us) 00:16:20.533 [2024-11-20T04:14:17.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.533 [2024-11-20T04:14:17.361Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.533 Nvme0n1 : 1.00 36455.00 142.40 0.00 0.00 0.00 0.00 0.00 00:16:20.533 [2024-11-20T04:14:17.361Z] =================================================================================================================== 00:16:20.533 [2024-11-20T04:14:17.361Z] Total : 36455.00 142.40 0.00 0.00 0.00 0.00 0.00 00:16:20.533 00:16:21.475 05:14:18 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:21.735 [2024-11-20T04:14:18.563Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.735 Nvme0n1 : 2.00 36771.00 143.64 0.00 0.00 0.00 0.00 0.00 00:16:21.735 [2024-11-20T04:14:18.563Z] =================================================================================================================== 00:16:21.735 [2024-11-20T04:14:18.563Z] Total : 36771.00 143.64 0.00 0.00 0.00 0.00 0.00 00:16:21.735 00:16:21.735 true 00:16:21.735 05:14:18 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:21.735 05:14:18 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:21.995 05:14:18 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:21.995 05:14:18 -- target/nvmf_lvs_grow.sh@62 -- # 
(( data_clusters == 99 )) 00:16:21.995 05:14:18 -- target/nvmf_lvs_grow.sh@65 -- # wait 260773 00:16:22.564 [2024-11-20T04:14:19.392Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.564 Nvme0n1 : 3.00 36895.33 144.12 0.00 0.00 0.00 0.00 0.00 00:16:22.564 [2024-11-20T04:14:19.392Z] =================================================================================================================== 00:16:22.564 [2024-11-20T04:14:19.392Z] Total : 36895.33 144.12 0.00 0.00 0.00 0.00 0.00 00:16:22.564 00:16:23.941 [2024-11-20T04:14:20.769Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.941 Nvme0n1 : 4.00 36799.50 143.75 0.00 0.00 0.00 0.00 0.00 00:16:23.941 [2024-11-20T04:14:20.769Z] =================================================================================================================== 00:16:23.941 [2024-11-20T04:14:20.769Z] Total : 36799.50 143.75 0.00 0.00 0.00 0.00 0.00 00:16:23.941 00:16:24.880 [2024-11-20T04:14:21.708Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.880 Nvme0n1 : 5.00 36903.40 144.15 0.00 0.00 0.00 0.00 0.00 00:16:24.880 [2024-11-20T04:14:21.708Z] =================================================================================================================== 00:16:24.880 [2024-11-20T04:14:21.708Z] Total : 36903.40 144.15 0.00 0.00 0.00 0.00 0.00 00:16:24.880 00:16:25.819 [2024-11-20T04:14:22.647Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.819 Nvme0n1 : 6.00 36987.17 144.48 0.00 0.00 0.00 0.00 0.00 00:16:25.819 [2024-11-20T04:14:22.647Z] =================================================================================================================== 00:16:25.819 [2024-11-20T04:14:22.647Z] Total : 36987.17 144.48 0.00 0.00 0.00 0.00 0.00 00:16:25.819 00:16:26.758 [2024-11-20T04:14:23.586Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:16:26.758 Nvme0n1 : 7.00 37051.86 144.73 0.00 0.00 0.00 0.00 0.00 00:16:26.758 [2024-11-20T04:14:23.586Z] =================================================================================================================== 00:16:26.758 [2024-11-20T04:14:23.586Z] Total : 37051.86 144.73 0.00 0.00 0.00 0.00 0.00 00:16:26.758 00:16:27.699 [2024-11-20T04:14:24.527Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.699 Nvme0n1 : 8.00 37083.75 144.86 0.00 0.00 0.00 0.00 0.00 00:16:27.699 [2024-11-20T04:14:24.527Z] =================================================================================================================== 00:16:27.699 [2024-11-20T04:14:24.527Z] Total : 37083.75 144.86 0.00 0.00 0.00 0.00 0.00 00:16:27.699 00:16:28.637 [2024-11-20T04:14:25.465Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.637 Nvme0n1 : 9.00 37117.11 144.99 0.00 0.00 0.00 0.00 0.00 00:16:28.637 [2024-11-20T04:14:25.465Z] =================================================================================================================== 00:16:28.637 [2024-11-20T04:14:25.465Z] Total : 37117.11 144.99 0.00 0.00 0.00 0.00 0.00 00:16:28.637 00:16:29.576 [2024-11-20T04:14:26.404Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.576 Nvme0n1 : 10.00 37149.20 145.11 0.00 0.00 0.00 0.00 0.00 00:16:29.576 [2024-11-20T04:14:26.404Z] =================================================================================================================== 00:16:29.576 [2024-11-20T04:14:26.404Z] Total : 37149.20 145.11 0.00 0.00 0.00 0.00 0.00 00:16:29.576 00:16:29.576 00:16:29.576 Latency(us) 00:16:29.576 [2024-11-20T04:14:26.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.576 [2024-11-20T04:14:26.404Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.576 Nvme0n1 : 10.00 37150.67 145.12 0.00 0.00 3442.66 2356.18 
10985.08 00:16:29.576 [2024-11-20T04:14:26.404Z] =================================================================================================================== 00:16:29.576 [2024-11-20T04:14:26.404Z] Total : 37150.67 145.12 0.00 0.00 3442.66 2356.18 10985.08 00:16:29.576 0 00:16:29.576 05:14:26 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 260536 00:16:29.576 05:14:26 -- common/autotest_common.sh@936 -- # '[' -z 260536 ']' 00:16:29.576 05:14:26 -- common/autotest_common.sh@940 -- # kill -0 260536 00:16:29.576 05:14:26 -- common/autotest_common.sh@941 -- # uname 00:16:29.576 05:14:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:29.835 05:14:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 260536 00:16:29.835 05:14:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:29.835 05:14:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:29.835 05:14:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 260536' 00:16:29.835 killing process with pid 260536 00:16:29.835 05:14:26 -- common/autotest_common.sh@955 -- # kill 260536 00:16:29.835 Received shutdown signal, test time was about 10.000000 seconds 00:16:29.835 00:16:29.835 Latency(us) 00:16:29.835 [2024-11-20T04:14:26.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.835 [2024-11-20T04:14:26.663Z] =================================================================================================================== 00:16:29.835 [2024-11-20T04:14:26.663Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.835 05:14:26 -- common/autotest_common.sh@960 -- # wait 260536 00:16:30.095 05:14:26 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:30.095 05:14:26 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:30.095 05:14:26 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:30.354 05:14:27 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:30.354 05:14:27 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:30.354 05:14:27 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 257430 00:16:30.354 05:14:27 -- target/nvmf_lvs_grow.sh@74 -- # wait 257430 00:16:30.355 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 257430 Killed "${NVMF_APP[@]}" "$@" 00:16:30.355 05:14:27 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:30.355 05:14:27 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:30.355 05:14:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:30.355 05:14:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.355 05:14:27 -- common/autotest_common.sh@10 -- # set +x 00:16:30.355 05:14:27 -- nvmf/common.sh@469 -- # nvmfpid=262629 00:16:30.355 05:14:27 -- nvmf/common.sh@470 -- # waitforlisten 262629 00:16:30.355 05:14:27 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:30.355 05:14:27 -- common/autotest_common.sh@829 -- # '[' -z 262629 ']' 00:16:30.355 05:14:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.355 05:14:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.355 05:14:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.355 05:14:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.355 05:14:27 -- common/autotest_common.sh@10 -- # set +x 00:16:30.355 [2024-11-20 05:14:27.112046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:30.355 [2024-11-20 05:14:27.112104] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.355 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.355 [2024-11-20 05:14:27.168737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.614 [2024-11-20 05:14:27.244349] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:30.614 [2024-11-20 05:14:27.244453] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.614 [2024-11-20 05:14:27.244460] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.614 [2024-11-20 05:14:27.244466] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.614 [2024-11-20 05:14:27.244487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.184 05:14:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.184 05:14:27 -- common/autotest_common.sh@862 -- # return 0 00:16:31.184 05:14:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:31.184 05:14:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.184 05:14:27 -- common/autotest_common.sh@10 -- # set +x 00:16:31.184 05:14:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.184 05:14:27 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:31.443 [2024-11-20 05:14:28.120797] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:31.444 [2024-11-20 05:14:28.120889] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: 
Recover: blob 0x0 00:16:31.444 [2024-11-20 05:14:28.120915] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:31.444 05:14:28 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:31.444 05:14:28 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 664345d3-b2d6-4059-b314-a78eda4315bf 00:16:31.444 05:14:28 -- common/autotest_common.sh@897 -- # local bdev_name=664345d3-b2d6-4059-b314-a78eda4315bf 00:16:31.444 05:14:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:31.444 05:14:28 -- common/autotest_common.sh@899 -- # local i 00:16:31.444 05:14:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:31.444 05:14:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:31.444 05:14:28 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:31.703 05:14:28 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 664345d3-b2d6-4059-b314-a78eda4315bf -t 2000 00:16:31.703 [ 00:16:31.703 { 00:16:31.703 "name": "664345d3-b2d6-4059-b314-a78eda4315bf", 00:16:31.703 "aliases": [ 00:16:31.703 "lvs/lvol" 00:16:31.703 ], 00:16:31.703 "product_name": "Logical Volume", 00:16:31.703 "block_size": 4096, 00:16:31.703 "num_blocks": 38912, 00:16:31.703 "uuid": "664345d3-b2d6-4059-b314-a78eda4315bf", 00:16:31.703 "assigned_rate_limits": { 00:16:31.703 "rw_ios_per_sec": 0, 00:16:31.703 "rw_mbytes_per_sec": 0, 00:16:31.703 "r_mbytes_per_sec": 0, 00:16:31.703 "w_mbytes_per_sec": 0 00:16:31.703 }, 00:16:31.703 "claimed": false, 00:16:31.703 "zoned": false, 00:16:31.703 "supported_io_types": { 00:16:31.703 "read": true, 00:16:31.703 "write": true, 00:16:31.703 "unmap": true, 00:16:31.703 "write_zeroes": true, 00:16:31.703 "flush": false, 00:16:31.703 "reset": true, 00:16:31.703 "compare": false, 00:16:31.703 "compare_and_write": false, 00:16:31.703 "abort": false, 00:16:31.703 "nvme_admin": false, 
00:16:31.703 "nvme_io": false 00:16:31.703 }, 00:16:31.703 "driver_specific": { 00:16:31.703 "lvol": { 00:16:31.703 "lvol_store_uuid": "a1890f17-ec4c-4940-8af4-17d5c34ef0c4", 00:16:31.703 "base_bdev": "aio_bdev", 00:16:31.703 "thin_provision": false, 00:16:31.703 "snapshot": false, 00:16:31.703 "clone": false, 00:16:31.703 "esnap_clone": false 00:16:31.703 } 00:16:31.703 } 00:16:31.703 } 00:16:31.703 ] 00:16:31.704 05:14:28 -- common/autotest_common.sh@905 -- # return 0 00:16:31.704 05:14:28 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:31.704 05:14:28 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:31.963 05:14:28 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:31.963 05:14:28 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:31.963 05:14:28 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:32.223 05:14:28 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:32.223 05:14:28 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:32.223 [2024-11-20 05:14:28.993615] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:32.223 05:14:29 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:32.223 05:14:29 -- common/autotest_common.sh@650 -- # local es=0 00:16:32.223 05:14:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:32.223 05:14:29 -- common/autotest_common.sh@638 -- # 
local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:32.223 05:14:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.223 05:14:29 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:32.223 05:14:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.223 05:14:29 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:32.223 05:14:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.223 05:14:29 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:32.223 05:14:29 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:16:32.223 05:14:29 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:32.482 request: 00:16:32.482 { 00:16:32.482 "uuid": "a1890f17-ec4c-4940-8af4-17d5c34ef0c4", 00:16:32.482 "method": "bdev_lvol_get_lvstores", 00:16:32.482 "req_id": 1 00:16:32.482 } 00:16:32.482 Got JSON-RPC error response 00:16:32.482 response: 00:16:32.482 { 00:16:32.482 "code": -19, 00:16:32.482 "message": "No such device" 00:16:32.482 } 00:16:32.482 05:14:29 -- common/autotest_common.sh@653 -- # es=1 00:16:32.482 05:14:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:32.482 05:14:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:32.482 05:14:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:32.482 05:14:29 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:32.741 aio_bdev 00:16:32.741 05:14:29 -- target/nvmf_lvs_grow.sh@86 -- # 
waitforbdev 664345d3-b2d6-4059-b314-a78eda4315bf 00:16:32.741 05:14:29 -- common/autotest_common.sh@897 -- # local bdev_name=664345d3-b2d6-4059-b314-a78eda4315bf 00:16:32.741 05:14:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:32.741 05:14:29 -- common/autotest_common.sh@899 -- # local i 00:16:32.741 05:14:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:32.742 05:14:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:32.742 05:14:29 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:32.742 05:14:29 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 664345d3-b2d6-4059-b314-a78eda4315bf -t 2000 00:16:33.001 [ 00:16:33.001 { 00:16:33.001 "name": "664345d3-b2d6-4059-b314-a78eda4315bf", 00:16:33.001 "aliases": [ 00:16:33.001 "lvs/lvol" 00:16:33.001 ], 00:16:33.001 "product_name": "Logical Volume", 00:16:33.001 "block_size": 4096, 00:16:33.001 "num_blocks": 38912, 00:16:33.001 "uuid": "664345d3-b2d6-4059-b314-a78eda4315bf", 00:16:33.001 "assigned_rate_limits": { 00:16:33.001 "rw_ios_per_sec": 0, 00:16:33.001 "rw_mbytes_per_sec": 0, 00:16:33.001 "r_mbytes_per_sec": 0, 00:16:33.001 "w_mbytes_per_sec": 0 00:16:33.001 }, 00:16:33.001 "claimed": false, 00:16:33.001 "zoned": false, 00:16:33.001 "supported_io_types": { 00:16:33.001 "read": true, 00:16:33.001 "write": true, 00:16:33.001 "unmap": true, 00:16:33.001 "write_zeroes": true, 00:16:33.001 "flush": false, 00:16:33.001 "reset": true, 00:16:33.001 "compare": false, 00:16:33.001 "compare_and_write": false, 00:16:33.001 "abort": false, 00:16:33.001 "nvme_admin": false, 00:16:33.001 "nvme_io": false 00:16:33.001 }, 00:16:33.001 "driver_specific": { 00:16:33.001 "lvol": { 00:16:33.001 "lvol_store_uuid": "a1890f17-ec4c-4940-8af4-17d5c34ef0c4", 00:16:33.001 "base_bdev": "aio_bdev", 00:16:33.001 "thin_provision": false, 00:16:33.001 
"snapshot": false, 00:16:33.001 "clone": false, 00:16:33.001 "esnap_clone": false 00:16:33.001 } 00:16:33.001 } 00:16:33.001 } 00:16:33.001 ] 00:16:33.001 05:14:29 -- common/autotest_common.sh@905 -- # return 0 00:16:33.001 05:14:29 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:33.001 05:14:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:33.261 05:14:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:33.261 05:14:29 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:33.261 05:14:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:33.261 05:14:30 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:33.261 05:14:30 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 664345d3-b2d6-4059-b314-a78eda4315bf 00:16:33.521 05:14:30 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1890f17-ec4c-4940-8af4-17d5c34ef0c4 00:16:33.779 05:14:30 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:33.779 05:14:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:34.039 00:16:34.039 real 0m17.370s 00:16:34.039 user 0m45.552s 00:16:34.039 sys 0m2.747s 00:16:34.039 05:14:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:34.039 05:14:30 -- common/autotest_common.sh@10 -- # set +x 00:16:34.039 ************************************ 00:16:34.039 END TEST lvs_grow_dirty 00:16:34.039 ************************************ 00:16:34.039 05:14:30 -- target/nvmf_lvs_grow.sh@1 -- # 
process_shm --id 0 00:16:34.039 05:14:30 -- common/autotest_common.sh@806 -- # type=--id 00:16:34.039 05:14:30 -- common/autotest_common.sh@807 -- # id=0 00:16:34.039 05:14:30 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:34.039 05:14:30 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:34.039 05:14:30 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:34.039 05:14:30 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:34.039 05:14:30 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:34.039 05:14:30 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:34.039 nvmf_trace.0 00:16:34.039 05:14:30 -- common/autotest_common.sh@821 -- # return 0 00:16:34.039 05:14:30 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:34.039 05:14:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:34.039 05:14:30 -- nvmf/common.sh@116 -- # sync 00:16:34.039 05:14:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:34.039 05:14:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:34.039 05:14:30 -- nvmf/common.sh@119 -- # set +e 00:16:34.039 05:14:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:34.039 05:14:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:34.039 rmmod nvme_rdma 00:16:34.039 rmmod nvme_fabrics 00:16:34.039 05:14:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:34.039 05:14:30 -- nvmf/common.sh@123 -- # set -e 00:16:34.039 05:14:30 -- nvmf/common.sh@124 -- # return 0 00:16:34.039 05:14:30 -- nvmf/common.sh@477 -- # '[' -n 262629 ']' 00:16:34.039 05:14:30 -- nvmf/common.sh@478 -- # killprocess 262629 00:16:34.039 05:14:30 -- common/autotest_common.sh@936 -- # '[' -z 262629 ']' 00:16:34.039 05:14:30 -- common/autotest_common.sh@940 -- # kill -0 262629 00:16:34.039 05:14:30 -- common/autotest_common.sh@941 -- # uname 00:16:34.040 
05:14:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.040 05:14:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 262629 00:16:34.040 05:14:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:34.040 05:14:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:34.040 05:14:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 262629' 00:16:34.040 killing process with pid 262629 00:16:34.040 05:14:30 -- common/autotest_common.sh@955 -- # kill 262629 00:16:34.040 05:14:30 -- common/autotest_common.sh@960 -- # wait 262629 00:16:34.300 05:14:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:34.300 05:14:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:34.300 00:16:34.300 real 0m39.590s 00:16:34.300 user 1m7.095s 00:16:34.300 sys 0m7.760s 00:16:34.300 05:14:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:34.300 05:14:30 -- common/autotest_common.sh@10 -- # set +x 00:16:34.300 ************************************ 00:16:34.300 END TEST nvmf_lvs_grow 00:16:34.300 ************************************ 00:16:34.300 05:14:31 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:16:34.300 05:14:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:34.300 05:14:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:34.300 05:14:31 -- common/autotest_common.sh@10 -- # set +x 00:16:34.300 ************************************ 00:16:34.300 START TEST nvmf_bdev_io_wait 00:16:34.300 ************************************ 00:16:34.300 05:14:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:16:34.300 * Looking for test storage... 
00:16:34.300 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:34.300 05:14:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:34.300 05:14:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:34.300 05:14:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:34.560 05:14:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:34.560 05:14:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:34.560 05:14:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:34.560 05:14:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:34.560 05:14:31 -- scripts/common.sh@335 -- # IFS=.-: 00:16:34.560 05:14:31 -- scripts/common.sh@335 -- # read -ra ver1 00:16:34.560 05:14:31 -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.560 05:14:31 -- scripts/common.sh@336 -- # read -ra ver2 00:16:34.560 05:14:31 -- scripts/common.sh@337 -- # local 'op=<' 00:16:34.560 05:14:31 -- scripts/common.sh@339 -- # ver1_l=2 00:16:34.560 05:14:31 -- scripts/common.sh@340 -- # ver2_l=1 00:16:34.560 05:14:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:34.560 05:14:31 -- scripts/common.sh@343 -- # case "$op" in 00:16:34.560 05:14:31 -- scripts/common.sh@344 -- # : 1 00:16:34.560 05:14:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:34.560 05:14:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.560 05:14:31 -- scripts/common.sh@364 -- # decimal 1 00:16:34.560 05:14:31 -- scripts/common.sh@352 -- # local d=1 00:16:34.560 05:14:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.560 05:14:31 -- scripts/common.sh@354 -- # echo 1 00:16:34.560 05:14:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:34.560 05:14:31 -- scripts/common.sh@365 -- # decimal 2 00:16:34.560 05:14:31 -- scripts/common.sh@352 -- # local d=2 00:16:34.560 05:14:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.560 05:14:31 -- scripts/common.sh@354 -- # echo 2 00:16:34.560 05:14:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:34.560 05:14:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:34.560 05:14:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:34.560 05:14:31 -- scripts/common.sh@367 -- # return 0 00:16:34.560 05:14:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.560 05:14:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.560 --rc genhtml_branch_coverage=1 00:16:34.560 --rc genhtml_function_coverage=1 00:16:34.560 --rc genhtml_legend=1 00:16:34.560 --rc geninfo_all_blocks=1 00:16:34.560 --rc geninfo_unexecuted_blocks=1 00:16:34.560 00:16:34.560 ' 00:16:34.560 05:14:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.560 --rc genhtml_branch_coverage=1 00:16:34.560 --rc genhtml_function_coverage=1 00:16:34.560 --rc genhtml_legend=1 00:16:34.560 --rc geninfo_all_blocks=1 00:16:34.560 --rc geninfo_unexecuted_blocks=1 00:16:34.560 00:16:34.560 ' 00:16:34.560 05:14:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.560 --rc genhtml_branch_coverage=1 00:16:34.560 --rc 
genhtml_function_coverage=1 00:16:34.560 --rc genhtml_legend=1 00:16:34.560 --rc geninfo_all_blocks=1 00:16:34.560 --rc geninfo_unexecuted_blocks=1 00:16:34.560 00:16:34.560 ' 00:16:34.560 05:14:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.560 --rc genhtml_branch_coverage=1 00:16:34.560 --rc genhtml_function_coverage=1 00:16:34.560 --rc genhtml_legend=1 00:16:34.560 --rc geninfo_all_blocks=1 00:16:34.560 --rc geninfo_unexecuted_blocks=1 00:16:34.560 00:16:34.560 ' 00:16:34.560 05:14:31 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.560 05:14:31 -- nvmf/common.sh@7 -- # uname -s 00:16:34.560 05:14:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.560 05:14:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.560 05:14:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.560 05:14:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.560 05:14:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.560 05:14:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.560 05:14:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.560 05:14:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.560 05:14:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.560 05:14:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.561 05:14:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:34.561 05:14:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:34.561 05:14:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.561 05:14:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.561 05:14:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:34.561 05:14:31 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:34.561 05:14:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.561 05:14:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.561 05:14:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.561 05:14:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.561 05:14:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.561 05:14:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.561 05:14:31 -- paths/export.sh@5 -- # export PATH 00:16:34.561 05:14:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.561 05:14:31 -- nvmf/common.sh@46 -- # : 0 00:16:34.561 05:14:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:34.561 05:14:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:34.561 05:14:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:34.561 05:14:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.561 05:14:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.561 05:14:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:34.561 05:14:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:34.561 05:14:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:34.561 05:14:31 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:34.561 05:14:31 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.561 05:14:31 -- 
target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:34.561 05:14:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:34.561 05:14:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.561 05:14:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:34.561 05:14:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:34.561 05:14:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:34.561 05:14:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.561 05:14:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.561 05:14:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.561 05:14:31 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:16:34.561 05:14:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:34.561 05:14:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:34.561 05:14:31 -- common/autotest_common.sh@10 -- # set +x 00:16:39.844 05:14:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:39.844 05:14:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:39.844 05:14:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:39.844 05:14:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:39.844 05:14:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:39.844 05:14:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:39.844 05:14:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:39.844 05:14:35 -- nvmf/common.sh@294 -- # net_devs=() 00:16:39.844 05:14:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:39.844 05:14:35 -- nvmf/common.sh@295 -- # e810=() 00:16:39.844 05:14:35 -- nvmf/common.sh@295 -- # local -ga e810 00:16:39.844 05:14:35 -- nvmf/common.sh@296 -- # x722=() 00:16:39.844 05:14:35 -- nvmf/common.sh@296 -- # local -ga x722 00:16:39.844 05:14:35 -- nvmf/common.sh@297 -- # mlx=() 00:16:39.844 05:14:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:39.844 05:14:35 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.844 05:14:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:39.844 05:14:35 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:39.844 05:14:35 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:39.844 05:14:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:39.844 05:14:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:39.844 05:14:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.844 05:14:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:39.844 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:39.844 05:14:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.844 05:14:35 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:39.844 05:14:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.844 05:14:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:39.844 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:39.844 05:14:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:39.844 05:14:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:39.844 05:14:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:16:39.844 05:14:35 -- nvmf/common.sh@376 -- # modinfo irdma 00:16:39.844 05:14:35 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:16:39.844 05:14:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:39.844 05:14:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.844 05:14:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.844 05:14:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.844 05:14:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:39.844 Found net devices under 0000:af:00.0: cvl_0_0 00:16:39.844 05:14:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.844 05:14:35 -- nvmf/common.sh@381 -- # for pci in 
"${pci_devs[@]}" 00:16:39.844 05:14:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.844 05:14:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.844 05:14:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.844 05:14:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:39.844 Found net devices under 0000:af:00.1: cvl_0_1 00:16:39.844 05:14:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.844 05:14:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:39.844 05:14:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:39.844 05:14:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:39.844 05:14:35 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:39.844 05:14:35 -- nvmf/common.sh@57 -- # uname 00:16:39.844 05:14:35 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:39.844 05:14:35 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:39.844 05:14:35 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:39.844 05:14:35 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:39.844 05:14:35 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:39.844 05:14:35 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:39.844 05:14:35 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:39.844 05:14:35 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:39.844 05:14:35 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:39.844 05:14:35 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:39.844 05:14:35 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:39.844 05:14:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:39.844 05:14:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:39.844 05:14:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:39.844 
05:14:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:39.844 05:14:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:39.844 05:14:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:39.844 05:14:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.844 05:14:35 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:39.844 05:14:35 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:16:39.844 05:14:35 -- nvmf/common.sh@104 -- # continue 2 00:16:39.845 05:14:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:39.845 05:14:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.845 05:14:35 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:39.845 05:14:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.845 05:14:35 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:39.845 05:14:35 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:16:39.845 05:14:35 -- nvmf/common.sh@104 -- # continue 2 00:16:39.845 05:14:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:39.845 05:14:35 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:16:39.845 05:14:35 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:39.845 05:14:35 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:39.845 05:14:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:39.845 05:14:35 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:16:39.845 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:39.845 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:39.845 altname enp175s0f0np0 00:16:39.845 altname ens801f0np0 00:16:39.845 inet 192.168.100.8/24 scope global cvl_0_0 
00:16:39.845 valid_lft forever preferred_lft forever 00:16:39.845 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:39.845 valid_lft forever preferred_lft forever 00:16:39.845 05:14:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:39.845 05:14:35 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:16:39.845 05:14:35 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:39.845 05:14:35 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:39.845 05:14:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:39.845 05:14:35 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:16:39.845 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:39.845 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:39.845 altname enp175s0f1np1 00:16:39.845 altname ens801f1np1 00:16:39.845 inet 192.168.100.9/24 scope global cvl_0_1 00:16:39.845 valid_lft forever preferred_lft forever 00:16:39.845 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:39.845 valid_lft forever preferred_lft forever 00:16:39.845 05:14:35 -- nvmf/common.sh@410 -- # return 0 00:16:39.845 05:14:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:39.845 05:14:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:39.845 05:14:35 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:39.845 05:14:35 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:39.845 05:14:35 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:39.845 05:14:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:39.845 05:14:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:39.845 05:14:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:39.845 05:14:35 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:39.845 05:14:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:39.845 05:14:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:39.845 05:14:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.845 05:14:35 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:39.845 05:14:35 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:16:39.845 05:14:35 -- nvmf/common.sh@104 -- # continue 2 00:16:39.845 05:14:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:39.845 05:14:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.845 05:14:35 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:39.845 05:14:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.845 05:14:35 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:39.845 05:14:35 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:16:39.845 05:14:35 -- nvmf/common.sh@104 -- # continue 2 00:16:39.845 05:14:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:39.845 05:14:35 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:16:39.845 05:14:35 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:39.845 05:14:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:39.845 05:14:35 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:16:39.845 05:14:35 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:39.845 05:14:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:39.845 05:14:35 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:39.845 
192.168.100.9' 00:16:39.845 05:14:35 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:39.845 192.168.100.9' 00:16:39.845 05:14:35 -- nvmf/common.sh@445 -- # head -n 1 00:16:39.845 05:14:35 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:39.845 05:14:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:39.845 192.168.100.9' 00:16:39.845 05:14:35 -- nvmf/common.sh@446 -- # tail -n +2 00:16:39.845 05:14:35 -- nvmf/common.sh@446 -- # head -n 1 00:16:39.845 05:14:35 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:39.845 05:14:35 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:39.845 05:14:35 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:39.845 05:14:35 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:39.845 05:14:35 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:39.845 05:14:35 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:39.845 05:14:35 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:39.845 05:14:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:39.845 05:14:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:39.845 05:14:35 -- common/autotest_common.sh@10 -- # set +x 00:16:39.845 05:14:35 -- nvmf/common.sh@469 -- # nvmfpid=266212 00:16:39.845 05:14:35 -- nvmf/common.sh@470 -- # waitforlisten 266212 00:16:39.845 05:14:35 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:39.845 05:14:35 -- common/autotest_common.sh@829 -- # '[' -z 266212 ']' 00:16:39.845 05:14:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.845 05:14:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.845 05:14:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:39.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.845 05:14:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.845 05:14:35 -- common/autotest_common.sh@10 -- # set +x 00:16:39.845 [2024-11-20 05:14:35.858356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:39.845 [2024-11-20 05:14:35.858398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.845 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.845 [2024-11-20 05:14:35.914288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.845 [2024-11-20 05:14:35.990476] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:39.845 [2024-11-20 05:14:35.990581] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.845 [2024-11-20 05:14:35.990589] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.845 [2024-11-20 05:14:35.990595] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:39.845 [2024-11-20 05:14:35.990640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.845 [2024-11-20 05:14:35.990743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.845 [2024-11-20 05:14:35.990832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.845 [2024-11-20 05:14:35.990833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.105 05:14:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.105 05:14:36 -- common/autotest_common.sh@862 -- # return 0 00:16:40.105 05:14:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:40.105 05:14:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:40.105 05:14:36 -- common/autotest_common.sh@10 -- # set +x 00:16:40.105 05:14:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.105 05:14:36 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:40.105 05:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.105 05:14:36 -- common/autotest_common.sh@10 -- # set +x 00:16:40.105 05:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.105 05:14:36 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:40.105 05:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.105 05:14:36 -- common/autotest_common.sh@10 -- # set +x 00:16:40.105 05:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.105 05:14:36 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:40.105 05:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.105 05:14:36 -- common/autotest_common.sh@10 -- # set +x 00:16:40.105 [2024-11-20 05:14:36.813673] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x194b130/0x194a770) succeed. 
00:16:40.105 [2024-11-20 05:14:36.822405] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x194c4a0/0x194acf0) succeed. 00:16:40.105 [2024-11-20 05:14:36.822427] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:16:40.105 05:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.105 05:14:36 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.105 05:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.105 05:14:36 -- common/autotest_common.sh@10 -- # set +x 00:16:40.105 Malloc0 00:16:40.105 05:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.105 05:14:36 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:40.105 05:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.105 05:14:36 -- common/autotest_common.sh@10 -- # set +x 00:16:40.105 05:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.105 05:14:36 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.105 05:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.105 05:14:36 -- common/autotest_common.sh@10 -- # set +x 00:16:40.105 05:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.105 05:14:36 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:40.105 05:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.105 05:14:36 -- common/autotest_common.sh@10 -- # set +x 00:16:40.105 [2024-11-20 05:14:36.885355] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:40.105 05:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.105 05:14:36 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=266463 
00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@30 -- # READ_PID=266465 00:16:40.106 05:14:36 -- nvmf/common.sh@520 -- # config=() 00:16:40.106 05:14:36 -- nvmf/common.sh@520 -- # local subsystem config 00:16:40.106 05:14:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:40.106 05:14:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:40.106 { 00:16:40.106 "params": { 00:16:40.106 "name": "Nvme$subsystem", 00:16:40.106 "trtype": "$TEST_TRANSPORT", 00:16:40.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.106 "adrfam": "ipv4", 00:16:40.106 "trsvcid": "$NVMF_PORT", 00:16:40.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.106 "hdgst": ${hdgst:-false}, 00:16:40.106 "ddgst": ${ddgst:-false} 00:16:40.106 }, 00:16:40.106 "method": "bdev_nvme_attach_controller" 00:16:40.106 } 00:16:40.106 EOF 00:16:40.106 )") 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=266467 00:16:40.106 05:14:36 -- nvmf/common.sh@520 -- # config=() 00:16:40.106 05:14:36 -- nvmf/common.sh@520 -- # local subsystem config 00:16:40.106 05:14:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:40.106 05:14:36 -- nvmf/common.sh@542 -- # config+=("$(cat 
<<-EOF 00:16:40.106 { 00:16:40.106 "params": { 00:16:40.106 "name": "Nvme$subsystem", 00:16:40.106 "trtype": "$TEST_TRANSPORT", 00:16:40.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.106 "adrfam": "ipv4", 00:16:40.106 "trsvcid": "$NVMF_PORT", 00:16:40.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.106 "hdgst": ${hdgst:-false}, 00:16:40.106 "ddgst": ${ddgst:-false} 00:16:40.106 }, 00:16:40.106 "method": "bdev_nvme_attach_controller" 00:16:40.106 } 00:16:40.106 EOF 00:16:40.106 )") 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=266470 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@35 -- # sync 00:16:40.106 05:14:36 -- nvmf/common.sh@520 -- # config=() 00:16:40.106 05:14:36 -- nvmf/common.sh@542 -- # cat 00:16:40.106 05:14:36 -- nvmf/common.sh@520 -- # local subsystem config 00:16:40.106 05:14:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:40.106 05:14:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:40.106 { 00:16:40.106 "params": { 00:16:40.106 "name": "Nvme$subsystem", 00:16:40.106 "trtype": "$TEST_TRANSPORT", 00:16:40.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.106 "adrfam": "ipv4", 00:16:40.106 "trsvcid": "$NVMF_PORT", 00:16:40.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.106 "hdgst": ${hdgst:-false}, 00:16:40.106 "ddgst": ${ddgst:-false} 00:16:40.106 }, 00:16:40.106 "method": "bdev_nvme_attach_controller" 00:16:40.106 } 00:16:40.106 EOF 00:16:40.106 )") 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:40.106 05:14:36 -- nvmf/common.sh@520 -- # config=() 
00:16:40.106 05:14:36 -- nvmf/common.sh@542 -- # cat 00:16:40.106 05:14:36 -- nvmf/common.sh@520 -- # local subsystem config 00:16:40.106 05:14:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:40.106 05:14:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:40.106 { 00:16:40.106 "params": { 00:16:40.106 "name": "Nvme$subsystem", 00:16:40.106 "trtype": "$TEST_TRANSPORT", 00:16:40.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.106 "adrfam": "ipv4", 00:16:40.106 "trsvcid": "$NVMF_PORT", 00:16:40.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.106 "hdgst": ${hdgst:-false}, 00:16:40.106 "ddgst": ${ddgst:-false} 00:16:40.106 }, 00:16:40.106 "method": "bdev_nvme_attach_controller" 00:16:40.106 } 00:16:40.106 EOF 00:16:40.106 )") 00:16:40.106 05:14:36 -- nvmf/common.sh@542 -- # cat 00:16:40.106 05:14:36 -- target/bdev_io_wait.sh@37 -- # wait 266463 00:16:40.106 05:14:36 -- nvmf/common.sh@542 -- # cat 00:16:40.106 05:14:36 -- nvmf/common.sh@544 -- # jq . 00:16:40.106 05:14:36 -- nvmf/common.sh@544 -- # jq . 00:16:40.106 05:14:36 -- nvmf/common.sh@544 -- # jq . 00:16:40.106 05:14:36 -- nvmf/common.sh@545 -- # IFS=, 00:16:40.106 05:14:36 -- nvmf/common.sh@544 -- # jq . 
00:16:40.106 05:14:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:40.106 "params": { 00:16:40.106 "name": "Nvme1", 00:16:40.106 "trtype": "rdma", 00:16:40.106 "traddr": "192.168.100.8", 00:16:40.106 "adrfam": "ipv4", 00:16:40.106 "trsvcid": "4420", 00:16:40.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:40.106 "hdgst": false, 00:16:40.106 "ddgst": false 00:16:40.106 }, 00:16:40.106 "method": "bdev_nvme_attach_controller" 00:16:40.106 }' 00:16:40.106 05:14:36 -- nvmf/common.sh@545 -- # IFS=, 00:16:40.106 05:14:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:40.106 "params": { 00:16:40.106 "name": "Nvme1", 00:16:40.106 "trtype": "rdma", 00:16:40.106 "traddr": "192.168.100.8", 00:16:40.106 "adrfam": "ipv4", 00:16:40.106 "trsvcid": "4420", 00:16:40.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:40.106 "hdgst": false, 00:16:40.106 "ddgst": false 00:16:40.106 }, 00:16:40.106 "method": "bdev_nvme_attach_controller" 00:16:40.106 }' 00:16:40.106 05:14:36 -- nvmf/common.sh@545 -- # IFS=, 00:16:40.106 05:14:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:40.106 "params": { 00:16:40.106 "name": "Nvme1", 00:16:40.106 "trtype": "rdma", 00:16:40.106 "traddr": "192.168.100.8", 00:16:40.106 "adrfam": "ipv4", 00:16:40.106 "trsvcid": "4420", 00:16:40.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:40.106 "hdgst": false, 00:16:40.106 "ddgst": false 00:16:40.106 }, 00:16:40.106 "method": "bdev_nvme_attach_controller" 00:16:40.106 }' 00:16:40.106 05:14:36 -- nvmf/common.sh@545 -- # IFS=, 00:16:40.106 05:14:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:40.106 "params": { 00:16:40.106 "name": "Nvme1", 00:16:40.106 "trtype": "rdma", 00:16:40.106 "traddr": "192.168.100.8", 00:16:40.106 "adrfam": "ipv4", 00:16:40.106 "trsvcid": "4420", 00:16:40.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.106 
"hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:40.106 "hdgst": false, 00:16:40.106 "ddgst": false 00:16:40.106 }, 00:16:40.106 "method": "bdev_nvme_attach_controller" 00:16:40.106 }' 00:16:40.366 [2024-11-20 05:14:36.933821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:40.366 [2024-11-20 05:14:36.933859] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:40.366 [2024-11-20 05:14:36.934380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:40.366 [2024-11-20 05:14:36.934429] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:40.366 [2024-11-20 05:14:36.934446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:40.366 [2024-11-20 05:14:36.934446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:40.366 [2024-11-20 05:14:36.934489] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:40.366 [2024-11-20 05:14:36.934489] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:40.366 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.366 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.366 [2024-11-20 05:14:37.120477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.366 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.625 [2024-11-20 05:14:37.200176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:40.625 [2024-11-20 05:14:37.213496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.625 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.625 [2024-11-20 05:14:37.292197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.625 [2024-11-20 05:14:37.314828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.625 [2024-11-20 05:14:37.375866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.625 [2024-11-20 05:14:37.409537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:40.625 [2024-11-20 05:14:37.451481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:40.885 Running I/O for 1 seconds... 00:16:40.885 Running I/O for 1 seconds... 00:16:40.885 Running I/O for 1 seconds... 00:16:40.885 Running I/O for 1 seconds... 
00:16:41.825 00:16:41.825 Latency(us) 00:16:41.825 [2024-11-20T04:14:38.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.825 [2024-11-20T04:14:38.653Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:41.825 Nvme1n1 : 1.00 257366.89 1005.34 0.00 0.00 495.25 197.00 3089.55 00:16:41.825 [2024-11-20T04:14:38.653Z] =================================================================================================================== 00:16:41.825 [2024-11-20T04:14:38.653Z] Total : 257366.89 1005.34 0.00 0.00 495.25 197.00 3089.55 00:16:41.825 00:16:41.825 Latency(us) 00:16:41.825 [2024-11-20T04:14:38.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.825 [2024-11-20T04:14:38.653Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:41.825 Nvme1n1 : 1.01 18055.14 70.53 0.00 0.00 7067.17 4119.41 18350.08 00:16:41.825 [2024-11-20T04:14:38.653Z] =================================================================================================================== 00:16:41.825 [2024-11-20T04:14:38.653Z] Total : 18055.14 70.53 0.00 0.00 7067.17 4119.41 18350.08 00:16:41.825 00:16:41.825 Latency(us) 00:16:41.825 [2024-11-20T04:14:38.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.825 [2024-11-20T04:14:38.653Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:41.825 Nvme1n1 : 1.00 17881.58 69.85 0.00 0.00 7138.29 4649.94 17101.78 00:16:41.825 [2024-11-20T04:14:38.653Z] =================================================================================================================== 00:16:41.825 [2024-11-20T04:14:38.653Z] Total : 17881.58 69.85 0.00 0.00 7138.29 4649.94 17101.78 00:16:41.825 00:16:41.825 Latency(us) 00:16:41.825 [2024-11-20T04:14:38.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.825 [2024-11-20T04:14:38.653Z] Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:16:41.825 Nvme1n1 : 1.00 15467.90 60.42 0.00 0.00 8255.21 3963.37 17226.61 00:16:41.825 [2024-11-20T04:14:38.653Z] =================================================================================================================== 00:16:41.825 [2024-11-20T04:14:38.653Z] Total : 15467.90 60.42 0.00 0.00 8255.21 3963.37 17226.61 00:16:42.085 05:14:38 -- target/bdev_io_wait.sh@38 -- # wait 266465 00:16:42.085 05:14:38 -- target/bdev_io_wait.sh@39 -- # wait 266467 00:16:42.085 05:14:38 -- target/bdev_io_wait.sh@40 -- # wait 266470 00:16:42.085 05:14:38 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.085 05:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.085 05:14:38 -- common/autotest_common.sh@10 -- # set +x 00:16:42.085 05:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.085 05:14:38 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:42.085 05:14:38 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:42.085 05:14:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:42.085 05:14:38 -- nvmf/common.sh@116 -- # sync 00:16:42.085 05:14:38 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:42.085 05:14:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:42.085 05:14:38 -- nvmf/common.sh@119 -- # set +e 00:16:42.085 05:14:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.085 05:14:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:42.085 rmmod nvme_rdma 00:16:42.345 rmmod nvme_fabrics 00:16:42.345 05:14:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.345 05:14:38 -- nvmf/common.sh@123 -- # set -e 00:16:42.345 05:14:38 -- nvmf/common.sh@124 -- # return 0 00:16:42.345 05:14:38 -- nvmf/common.sh@477 -- # '[' -n 266212 ']' 00:16:42.345 05:14:38 -- nvmf/common.sh@478 -- # killprocess 266212 00:16:42.345 05:14:38 -- common/autotest_common.sh@936 -- # '[' -z 266212 ']' 00:16:42.345 
05:14:38 -- common/autotest_common.sh@940 -- # kill -0 266212 00:16:42.345 05:14:38 -- common/autotest_common.sh@941 -- # uname 00:16:42.345 05:14:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.345 05:14:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 266212 00:16:42.345 05:14:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.345 05:14:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.345 05:14:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 266212' 00:16:42.345 killing process with pid 266212 00:16:42.345 05:14:38 -- common/autotest_common.sh@955 -- # kill 266212 00:16:42.345 05:14:38 -- common/autotest_common.sh@960 -- # wait 266212 00:16:42.604 05:14:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:42.604 05:14:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:42.604 00:16:42.604 real 0m8.200s 00:16:42.604 user 0m19.877s 00:16:42.604 sys 0m4.753s 00:16:42.604 05:14:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:42.604 05:14:39 -- common/autotest_common.sh@10 -- # set +x 00:16:42.604 ************************************ 00:16:42.604 END TEST nvmf_bdev_io_wait 00:16:42.604 ************************************ 00:16:42.604 05:14:39 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:16:42.604 05:14:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:42.604 05:14:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.604 05:14:39 -- common/autotest_common.sh@10 -- # set +x 00:16:42.604 ************************************ 00:16:42.604 START TEST nvmf_queue_depth 00:16:42.604 ************************************ 00:16:42.604 05:14:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:16:42.604 * Looking for test 
storage... 00:16:42.604 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:42.604 05:14:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:42.604 05:14:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:42.604 05:14:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:42.604 05:14:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:42.604 05:14:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:42.604 05:14:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:42.604 05:14:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:42.604 05:14:39 -- scripts/common.sh@335 -- # IFS=.-: 00:16:42.604 05:14:39 -- scripts/common.sh@335 -- # read -ra ver1 00:16:42.604 05:14:39 -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.605 05:14:39 -- scripts/common.sh@336 -- # read -ra ver2 00:16:42.605 05:14:39 -- scripts/common.sh@337 -- # local 'op=<' 00:16:42.605 05:14:39 -- scripts/common.sh@339 -- # ver1_l=2 00:16:42.605 05:14:39 -- scripts/common.sh@340 -- # ver2_l=1 00:16:42.605 05:14:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:42.605 05:14:39 -- scripts/common.sh@343 -- # case "$op" in 00:16:42.605 05:14:39 -- scripts/common.sh@344 -- # : 1 00:16:42.605 05:14:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:42.605 05:14:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.605 05:14:39 -- scripts/common.sh@364 -- # decimal 1 00:16:42.605 05:14:39 -- scripts/common.sh@352 -- # local d=1 00:16:42.605 05:14:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.605 05:14:39 -- scripts/common.sh@354 -- # echo 1 00:16:42.605 05:14:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:42.605 05:14:39 -- scripts/common.sh@365 -- # decimal 2 00:16:42.605 05:14:39 -- scripts/common.sh@352 -- # local d=2 00:16:42.605 05:14:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.605 05:14:39 -- scripts/common.sh@354 -- # echo 2 00:16:42.605 05:14:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:42.605 05:14:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:42.605 05:14:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:42.605 05:14:39 -- scripts/common.sh@367 -- # return 0 00:16:42.605 05:14:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.605 05:14:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:42.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.605 --rc genhtml_branch_coverage=1 00:16:42.605 --rc genhtml_function_coverage=1 00:16:42.605 --rc genhtml_legend=1 00:16:42.605 --rc geninfo_all_blocks=1 00:16:42.605 --rc geninfo_unexecuted_blocks=1 00:16:42.605 00:16:42.605 ' 00:16:42.605 05:14:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:42.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.605 --rc genhtml_branch_coverage=1 00:16:42.605 --rc genhtml_function_coverage=1 00:16:42.605 --rc genhtml_legend=1 00:16:42.605 --rc geninfo_all_blocks=1 00:16:42.605 --rc geninfo_unexecuted_blocks=1 00:16:42.605 00:16:42.605 ' 00:16:42.605 05:14:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:42.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.605 --rc genhtml_branch_coverage=1 00:16:42.605 --rc 
genhtml_function_coverage=1 00:16:42.605 --rc genhtml_legend=1 00:16:42.605 --rc geninfo_all_blocks=1 00:16:42.605 --rc geninfo_unexecuted_blocks=1 00:16:42.605 00:16:42.605 ' 00:16:42.605 05:14:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:42.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.605 --rc genhtml_branch_coverage=1 00:16:42.605 --rc genhtml_function_coverage=1 00:16:42.605 --rc genhtml_legend=1 00:16:42.605 --rc geninfo_all_blocks=1 00:16:42.605 --rc geninfo_unexecuted_blocks=1 00:16:42.605 00:16:42.605 ' 00:16:42.605 05:14:39 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.605 05:14:39 -- nvmf/common.sh@7 -- # uname -s 00:16:42.605 05:14:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.605 05:14:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.605 05:14:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.605 05:14:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.605 05:14:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.605 05:14:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.605 05:14:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.605 05:14:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.605 05:14:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.605 05:14:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.605 05:14:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:42.605 05:14:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:42.605 05:14:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.605 05:14:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.605 05:14:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:42.605 05:14:39 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:42.605 05:14:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.605 05:14:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.605 05:14:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.605 05:14:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.605 05:14:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.605 05:14:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.605 05:14:39 -- paths/export.sh@5 -- # export PATH 00:16:42.605 05:14:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.605 05:14:39 -- nvmf/common.sh@46 -- # : 0 00:16:42.605 05:14:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:42.605 05:14:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:42.605 05:14:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:42.605 05:14:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.605 05:14:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.605 05:14:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:42.605 05:14:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:42.605 05:14:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:42.605 05:14:39 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:42.605 05:14:39 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:42.605 05:14:39 -- 
target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:42.605 05:14:39 -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:42.605 05:14:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:42.605 05:14:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.605 05:14:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:42.605 05:14:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:42.605 05:14:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:42.605 05:14:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.605 05:14:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.605 05:14:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.605 05:14:39 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:16:42.605 05:14:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:42.605 05:14:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:42.605 05:14:39 -- common/autotest_common.sh@10 -- # set +x 00:16:47.887 05:14:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:47.887 05:14:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:47.887 05:14:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:47.887 05:14:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:47.887 05:14:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:47.887 05:14:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:47.887 05:14:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:47.887 05:14:44 -- nvmf/common.sh@294 -- # net_devs=() 00:16:47.887 05:14:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:47.887 05:14:44 -- nvmf/common.sh@295 -- # e810=() 00:16:47.887 05:14:44 -- nvmf/common.sh@295 -- # local -ga e810 00:16:47.887 05:14:44 -- nvmf/common.sh@296 -- # x722=() 00:16:47.887 05:14:44 -- nvmf/common.sh@296 -- # local -ga x722 00:16:47.887 05:14:44 -- nvmf/common.sh@297 -- # mlx=() 00:16:47.887 05:14:44 -- nvmf/common.sh@297 -- # 
local -ga mlx 00:16:47.887 05:14:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.887 05:14:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:47.887 05:14:44 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:47.887 05:14:44 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:47.887 05:14:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:47.887 05:14:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:47.887 05:14:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:47.887 05:14:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:47.887 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:47.887 05:14:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@349 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.887 05:14:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:47.887 05:14:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:47.887 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:47.887 05:14:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.887 05:14:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:47.887 05:14:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:16:47.887 05:14:44 -- nvmf/common.sh@376 -- # modinfo irdma 00:16:47.887 05:14:44 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:16:47.887 05:14:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:47.887 05:14:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.887 05:14:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:47.887 05:14:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.887 05:14:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:47.887 Found net devices under 0000:af:00.0: cvl_0_0 00:16:47.887 05:14:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.887 
05:14:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:47.887 05:14:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.887 05:14:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:47.887 05:14:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.887 05:14:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:47.887 Found net devices under 0000:af:00.1: cvl_0_1 00:16:47.887 05:14:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.887 05:14:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:47.887 05:14:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:47.887 05:14:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:47.887 05:14:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:47.887 05:14:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:47.887 05:14:44 -- nvmf/common.sh@57 -- # uname 00:16:47.887 05:14:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:47.887 05:14:44 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:47.887 05:14:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:47.887 05:14:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:47.887 05:14:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:47.887 05:14:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:48.148 05:14:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:48.148 05:14:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:48.148 05:14:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:48.148 05:14:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:48.148 05:14:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:48.148 05:14:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:48.148 05:14:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:48.148 05:14:44 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:48.148 05:14:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:48.148 05:14:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:48.148 05:14:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:48.148 05:14:44 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:16:48.148 05:14:44 -- nvmf/common.sh@104 -- # continue 2 00:16:48.148 05:14:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:48.148 05:14:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:48.148 05:14:44 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:16:48.148 05:14:44 -- nvmf/common.sh@104 -- # continue 2 00:16:48.148 05:14:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:48.148 05:14:44 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:16:48.148 05:14:44 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.148 05:14:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:48.148 05:14:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:48.148 05:14:44 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:16:48.148 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:48.148 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:48.148 altname enp175s0f0np0 00:16:48.148 altname ens801f0np0 00:16:48.148 
inet 192.168.100.8/24 scope global cvl_0_0 00:16:48.148 valid_lft forever preferred_lft forever 00:16:48.148 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:48.148 valid_lft forever preferred_lft forever 00:16:48.148 05:14:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:48.148 05:14:44 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:16:48.148 05:14:44 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.148 05:14:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:48.148 05:14:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:48.148 05:14:44 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:16:48.148 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:48.148 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:48.148 altname enp175s0f1np1 00:16:48.148 altname ens801f1np1 00:16:48.148 inet 192.168.100.9/24 scope global cvl_0_1 00:16:48.148 valid_lft forever preferred_lft forever 00:16:48.148 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:48.148 valid_lft forever preferred_lft forever 00:16:48.148 05:14:44 -- nvmf/common.sh@410 -- # return 0 00:16:48.148 05:14:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.148 05:14:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:48.148 05:14:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:48.148 05:14:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:48.148 05:14:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:48.148 05:14:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:48.148 05:14:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:48.148 05:14:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:48.148 05:14:44 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:48.148 05:14:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:48.148 05:14:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:48.148 05:14:44 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:16:48.148 05:14:44 -- nvmf/common.sh@104 -- # continue 2 00:16:48.148 05:14:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:48.148 05:14:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:48.148 05:14:44 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:48.148 05:14:44 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:16:48.148 05:14:44 -- nvmf/common.sh@104 -- # continue 2 00:16:48.148 05:14:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:48.148 05:14:44 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:16:48.148 05:14:44 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.148 05:14:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:48.148 05:14:44 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:16:48.148 05:14:44 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:48.148 05:14:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:48.148 05:14:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:48.148 
192.168.100.9' 00:16:48.148 05:14:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:48.148 192.168.100.9' 00:16:48.148 05:14:44 -- nvmf/common.sh@445 -- # head -n 1 00:16:48.148 05:14:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:48.148 05:14:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:48.148 192.168.100.9' 00:16:48.148 05:14:44 -- nvmf/common.sh@446 -- # tail -n +2 00:16:48.148 05:14:44 -- nvmf/common.sh@446 -- # head -n 1 00:16:48.148 05:14:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:48.148 05:14:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:48.148 05:14:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:48.148 05:14:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:48.148 05:14:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:48.148 05:14:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:48.148 05:14:44 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:48.148 05:14:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.148 05:14:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.148 05:14:44 -- common/autotest_common.sh@10 -- # set +x 00:16:48.148 05:14:44 -- nvmf/common.sh@469 -- # nvmfpid=269993 00:16:48.148 05:14:44 -- nvmf/common.sh@470 -- # waitforlisten 269993 00:16:48.148 05:14:44 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:48.148 05:14:44 -- common/autotest_common.sh@829 -- # '[' -z 269993 ']' 00:16:48.148 05:14:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.148 05:14:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.149 05:14:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:48.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.149 05:14:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.149 05:14:44 -- common/autotest_common.sh@10 -- # set +x 00:16:48.149 [2024-11-20 05:14:44.929422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.149 [2024-11-20 05:14:44.929463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.149 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.408 [2024-11-20 05:14:44.986035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.408 [2024-11-20 05:14:45.059208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.408 [2024-11-20 05:14:45.059316] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.408 [2024-11-20 05:14:45.059324] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.408 [2024-11-20 05:14:45.059330] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:48.408 [2024-11-20 05:14:45.059346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.976 05:14:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.976 05:14:45 -- common/autotest_common.sh@862 -- # return 0 00:16:48.976 05:14:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:48.976 05:14:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.976 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:16:48.976 05:14:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.976 05:14:45 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:48.976 05:14:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.976 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:16:48.976 [2024-11-20 05:14:45.785993] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x7d02a0/0x7cf8e0) succeed. 00:16:48.976 [2024-11-20 05:14:45.794269] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x7d1550/0x7cfe60) succeed. 00:16:48.976 [2024-11-20 05:14:45.794291] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:16:48.976 05:14:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.976 05:14:45 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:48.976 05:14:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.976 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:16:49.237 Malloc0 00:16:49.237 05:14:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.237 05:14:45 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:49.237 05:14:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.237 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:16:49.237 05:14:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.237 05:14:45 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.237 05:14:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.237 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:16:49.237 05:14:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.237 05:14:45 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:49.237 05:14:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.237 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:16:49.237 [2024-11-20 05:14:45.845951] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:49.237 05:14:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.237 05:14:45 -- target/queue_depth.sh@30 -- # bdevperf_pid=270032 00:16:49.237 05:14:45 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.237 05:14:45 -- target/queue_depth.sh@33 -- # waitforlisten 270032 /var/tmp/bdevperf.sock 00:16:49.237 05:14:45 -- 
target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:49.237 05:14:45 -- common/autotest_common.sh@829 -- # '[' -z 270032 ']' 00:16:49.237 05:14:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.237 05:14:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.237 05:14:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:49.237 05:14:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.237 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:16:49.237 [2024-11-20 05:14:45.879430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:49.237 [2024-11-20 05:14:45.879467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid270032 ] 00:16:49.237 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.237 [2024-11-20 05:14:45.935107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.237 [2024-11-20 05:14:46.010148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.174 05:14:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.174 05:14:46 -- common/autotest_common.sh@862 -- # return 0 00:16:50.174 05:14:46 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:50.174 05:14:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.174 05:14:46 -- common/autotest_common.sh@10 -- # set +x 00:16:50.174 NVMe0n1 
00:16:50.174 05:14:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.174 05:14:46 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:50.174 Running I/O for 10 seconds... 00:17:00.162 00:17:00.162 Latency(us) 00:17:00.162 [2024-11-20T04:14:56.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.162 [2024-11-20T04:14:56.990Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:00.162 Verification LBA range: start 0x0 length 0x4000 00:17:00.162 NVMe0n1 : 10.03 28921.51 112.97 0.00 0.00 35326.15 6990.51 34203.55 00:17:00.162 [2024-11-20T04:14:56.990Z] =================================================================================================================== 00:17:00.162 [2024-11-20T04:14:56.990Z] Total : 28921.51 112.97 0.00 0.00 35326.15 6990.51 34203.55 00:17:00.162 0 00:17:00.162 05:14:56 -- target/queue_depth.sh@39 -- # killprocess 270032 00:17:00.162 05:14:56 -- common/autotest_common.sh@936 -- # '[' -z 270032 ']' 00:17:00.162 05:14:56 -- common/autotest_common.sh@940 -- # kill -0 270032 00:17:00.162 05:14:56 -- common/autotest_common.sh@941 -- # uname 00:17:00.162 05:14:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.162 05:14:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 270032 00:17:00.162 05:14:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:00.162 05:14:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:00.162 05:14:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 270032' 00:17:00.162 killing process with pid 270032 00:17:00.162 05:14:56 -- common/autotest_common.sh@955 -- # kill 270032 00:17:00.162 Received shutdown signal, test time was about 10.000000 seconds 00:17:00.162 00:17:00.162 Latency(us) 00:17:00.162 [2024-11-20T04:14:56.990Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.162 [2024-11-20T04:14:56.990Z] =================================================================================================================== 00:17:00.162 [2024-11-20T04:14:56.990Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.162 05:14:56 -- common/autotest_common.sh@960 -- # wait 270032 00:17:00.422 05:14:57 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:00.422 05:14:57 -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:00.422 05:14:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:00.422 05:14:57 -- nvmf/common.sh@116 -- # sync 00:17:00.422 05:14:57 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:00.422 05:14:57 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:00.422 05:14:57 -- nvmf/common.sh@119 -- # set +e 00:17:00.422 05:14:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:00.422 05:14:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:00.422 rmmod nvme_rdma 00:17:00.422 rmmod nvme_fabrics 00:17:00.422 05:14:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:00.422 05:14:57 -- nvmf/common.sh@123 -- # set -e 00:17:00.422 05:14:57 -- nvmf/common.sh@124 -- # return 0 00:17:00.422 05:14:57 -- nvmf/common.sh@477 -- # '[' -n 269993 ']' 00:17:00.422 05:14:57 -- nvmf/common.sh@478 -- # killprocess 269993 00:17:00.422 05:14:57 -- common/autotest_common.sh@936 -- # '[' -z 269993 ']' 00:17:00.422 05:14:57 -- common/autotest_common.sh@940 -- # kill -0 269993 00:17:00.422 05:14:57 -- common/autotest_common.sh@941 -- # uname 00:17:00.422 05:14:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.422 05:14:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 269993 00:17:00.682 05:14:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:00.682 05:14:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:00.682 05:14:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
269993' 00:17:00.682 killing process with pid 269993 00:17:00.682 05:14:57 -- common/autotest_common.sh@955 -- # kill 269993 00:17:00.682 05:14:57 -- common/autotest_common.sh@960 -- # wait 269993 00:17:00.943 05:14:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:00.943 05:14:57 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:00.943 00:17:00.943 real 0m18.281s 00:17:00.943 user 0m25.897s 00:17:00.943 sys 0m4.667s 00:17:00.943 05:14:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:00.943 05:14:57 -- common/autotest_common.sh@10 -- # set +x 00:17:00.943 ************************************ 00:17:00.943 END TEST nvmf_queue_depth 00:17:00.943 ************************************ 00:17:00.943 05:14:57 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:17:00.943 05:14:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:00.943 05:14:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.943 05:14:57 -- common/autotest_common.sh@10 -- # set +x 00:17:00.943 ************************************ 00:17:00.943 START TEST nvmf_multipath 00:17:00.943 ************************************ 00:17:00.943 05:14:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:17:00.943 * Looking for test storage... 
00:17:00.943 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:00.943 05:14:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:00.943 05:14:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:00.943 05:14:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:00.943 05:14:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:00.943 05:14:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:00.943 05:14:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:00.943 05:14:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:00.943 05:14:57 -- scripts/common.sh@335 -- # IFS=.-: 00:17:00.943 05:14:57 -- scripts/common.sh@335 -- # read -ra ver1 00:17:00.943 05:14:57 -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.943 05:14:57 -- scripts/common.sh@336 -- # read -ra ver2 00:17:00.943 05:14:57 -- scripts/common.sh@337 -- # local 'op=<' 00:17:00.943 05:14:57 -- scripts/common.sh@339 -- # ver1_l=2 00:17:00.943 05:14:57 -- scripts/common.sh@340 -- # ver2_l=1 00:17:00.943 05:14:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:00.943 05:14:57 -- scripts/common.sh@343 -- # case "$op" in 00:17:00.943 05:14:57 -- scripts/common.sh@344 -- # : 1 00:17:00.943 05:14:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:00.943 05:14:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.943 05:14:57 -- scripts/common.sh@364 -- # decimal 1 00:17:00.943 05:14:57 -- scripts/common.sh@352 -- # local d=1 00:17:00.943 05:14:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.943 05:14:57 -- scripts/common.sh@354 -- # echo 1 00:17:00.943 05:14:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:00.943 05:14:57 -- scripts/common.sh@365 -- # decimal 2 00:17:00.943 05:14:57 -- scripts/common.sh@352 -- # local d=2 00:17:00.943 05:14:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.943 05:14:57 -- scripts/common.sh@354 -- # echo 2 00:17:00.943 05:14:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:00.943 05:14:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:00.943 05:14:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:00.943 05:14:57 -- scripts/common.sh@367 -- # return 0 00:17:00.943 05:14:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.943 05:14:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:00.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.943 --rc genhtml_branch_coverage=1 00:17:00.943 --rc genhtml_function_coverage=1 00:17:00.943 --rc genhtml_legend=1 00:17:00.943 --rc geninfo_all_blocks=1 00:17:00.943 --rc geninfo_unexecuted_blocks=1 00:17:00.943 00:17:00.943 ' 00:17:00.943 05:14:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:00.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.943 --rc genhtml_branch_coverage=1 00:17:00.943 --rc genhtml_function_coverage=1 00:17:00.943 --rc genhtml_legend=1 00:17:00.943 --rc geninfo_all_blocks=1 00:17:00.943 --rc geninfo_unexecuted_blocks=1 00:17:00.943 00:17:00.943 ' 00:17:00.943 05:14:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:00.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.943 --rc genhtml_branch_coverage=1 00:17:00.943 --rc 
genhtml_function_coverage=1 00:17:00.943 --rc genhtml_legend=1 00:17:00.943 --rc geninfo_all_blocks=1 00:17:00.943 --rc geninfo_unexecuted_blocks=1 00:17:00.943 00:17:00.943 ' 00:17:00.943 05:14:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:00.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.943 --rc genhtml_branch_coverage=1 00:17:00.943 --rc genhtml_function_coverage=1 00:17:00.943 --rc genhtml_legend=1 00:17:00.943 --rc geninfo_all_blocks=1 00:17:00.943 --rc geninfo_unexecuted_blocks=1 00:17:00.943 00:17:00.943 ' 00:17:00.943 05:14:57 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.943 05:14:57 -- nvmf/common.sh@7 -- # uname -s 00:17:00.943 05:14:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.943 05:14:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.943 05:14:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.943 05:14:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.943 05:14:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.943 05:14:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.943 05:14:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.943 05:14:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.943 05:14:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.943 05:14:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.943 05:14:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:00.943 05:14:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:00.943 05:14:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.943 05:14:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.943 05:14:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:00.943 05:14:57 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:00.943 05:14:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.943 05:14:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.943 05:14:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.944 05:14:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.944 05:14:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.944 05:14:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.944 05:14:57 -- paths/export.sh@5 -- # export PATH 00:17:00.944 05:14:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.944 05:14:57 -- nvmf/common.sh@46 -- # : 0 00:17:00.944 05:14:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:00.944 05:14:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:00.944 05:14:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:00.944 05:14:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.944 05:14:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.944 05:14:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:00.944 05:14:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:00.944 05:14:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:00.944 05:14:57 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.944 05:14:57 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.944 05:14:57 -- 
target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:00.944 05:14:57 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:17:00.944 05:14:57 -- target/multipath.sh@43 -- # nvmftestinit 00:17:00.944 05:14:57 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:00.944 05:14:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.944 05:14:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:00.944 05:14:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:00.944 05:14:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:00.944 05:14:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.944 05:14:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.944 05:14:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.944 05:14:57 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:17:00.944 05:14:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:00.944 05:14:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:00.944 05:14:57 -- common/autotest_common.sh@10 -- # set +x 00:17:06.223 05:15:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:06.223 05:15:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:06.223 05:15:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:06.223 05:15:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:06.223 05:15:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:06.223 05:15:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:06.223 05:15:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:06.223 05:15:02 -- nvmf/common.sh@294 -- # net_devs=() 00:17:06.223 05:15:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:06.223 05:15:02 -- nvmf/common.sh@295 -- # e810=() 00:17:06.223 05:15:02 -- nvmf/common.sh@295 -- # local -ga e810 00:17:06.223 05:15:02 -- nvmf/common.sh@296 -- # x722=() 00:17:06.223 05:15:02 -- nvmf/common.sh@296 -- # local -ga 
x722 00:17:06.223 05:15:02 -- nvmf/common.sh@297 -- # mlx=() 00:17:06.223 05:15:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:06.223 05:15:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.223 05:15:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:06.223 05:15:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:06.223 05:15:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:06.223 05:15:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:06.223 05:15:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:06.223 05:15:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:06.223 05:15:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:06.223 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:06.223 05:15:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:17:06.223 05:15:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.223 05:15:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:06.223 05:15:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:06.223 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:06.223 05:15:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.223 05:15:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:06.223 05:15:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:17:06.223 05:15:02 -- nvmf/common.sh@376 -- # modinfo irdma 00:17:06.223 05:15:02 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:17:06.223 05:15:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:06.223 05:15:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.223 05:15:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:06.223 05:15:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.223 05:15:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:06.223 Found net devices 
under 0000:af:00.0: cvl_0_0 00:17:06.223 05:15:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.223 05:15:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:06.223 05:15:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.223 05:15:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:06.223 05:15:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.223 05:15:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:06.223 Found net devices under 0000:af:00.1: cvl_0_1 00:17:06.223 05:15:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.223 05:15:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:06.223 05:15:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:06.223 05:15:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:06.223 05:15:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:06.223 05:15:02 -- nvmf/common.sh@57 -- # uname 00:17:06.223 05:15:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:06.223 05:15:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:06.223 05:15:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:06.223 05:15:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:06.223 05:15:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:06.223 05:15:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:06.223 05:15:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:06.223 05:15:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:06.223 05:15:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:06.223 05:15:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:06.223 05:15:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:06.223 05:15:02 -- nvmf/common.sh@91 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:17:06.223 05:15:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:06.223 05:15:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:06.223 05:15:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.223 05:15:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:06.223 05:15:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:06.223 05:15:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.223 05:15:02 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:06.223 05:15:02 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:06.223 05:15:02 -- nvmf/common.sh@104 -- # continue 2 00:17:06.223 05:15:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:06.223 05:15:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.224 05:15:02 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:06.224 05:15:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.224 05:15:02 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:06.224 05:15:02 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:06.224 05:15:02 -- nvmf/common.sh@104 -- # continue 2 00:17:06.224 05:15:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:06.224 05:15:02 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:17:06.224 05:15:02 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:06.224 05:15:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:06.224 05:15:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:06.224 05:15:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:06.224 05:15:02 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:06.224 05:15:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:06.224 05:15:02 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:17:06.224 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:06.224 link/ether 
b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:06.224 altname enp175s0f0np0 00:17:06.224 altname ens801f0np0 00:17:06.224 inet 192.168.100.8/24 scope global cvl_0_0 00:17:06.224 valid_lft forever preferred_lft forever 00:17:06.224 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:06.224 valid_lft forever preferred_lft forever 00:17:06.224 05:15:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:06.224 05:15:03 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:17:06.224 05:15:03 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:06.224 05:15:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:06.224 05:15:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:06.224 05:15:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:06.224 05:15:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:06.224 05:15:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:06.224 05:15:03 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:17:06.224 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:06.224 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:06.224 altname enp175s0f1np1 00:17:06.224 altname ens801f1np1 00:17:06.224 inet 192.168.100.9/24 scope global cvl_0_1 00:17:06.224 valid_lft forever preferred_lft forever 00:17:06.224 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:06.224 valid_lft forever preferred_lft forever 00:17:06.224 05:15:03 -- nvmf/common.sh@410 -- # return 0 00:17:06.224 05:15:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:06.224 05:15:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:06.224 05:15:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:06.224 05:15:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:06.224 05:15:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:06.224 05:15:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.224 05:15:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 
00:17:06.224 05:15:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:06.224 05:15:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.224 05:15:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:06.224 05:15:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:06.224 05:15:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.224 05:15:03 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:06.224 05:15:03 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:06.224 05:15:03 -- nvmf/common.sh@104 -- # continue 2 00:17:06.224 05:15:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:06.224 05:15:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.224 05:15:03 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:06.224 05:15:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.224 05:15:03 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:06.224 05:15:03 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:06.224 05:15:03 -- nvmf/common.sh@104 -- # continue 2 00:17:06.484 05:15:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:06.484 05:15:03 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:17:06.484 05:15:03 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:06.484 05:15:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:06.484 05:15:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:06.484 05:15:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:06.484 05:15:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:06.484 05:15:03 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:17:06.484 05:15:03 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:06.484 05:15:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:06.484 05:15:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:06.484 05:15:03 -- 
nvmf/common.sh@112 -- # cut -d/ -f1 00:17:06.484 05:15:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:06.484 192.168.100.9' 00:17:06.484 05:15:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:06.484 192.168.100.9' 00:17:06.484 05:15:03 -- nvmf/common.sh@445 -- # head -n 1 00:17:06.484 05:15:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:06.484 05:15:03 -- nvmf/common.sh@446 -- # head -n 1 00:17:06.484 05:15:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:06.484 192.168.100.9' 00:17:06.484 05:15:03 -- nvmf/common.sh@446 -- # tail -n +2 00:17:06.484 05:15:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:06.484 05:15:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:06.484 05:15:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:06.484 05:15:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:06.484 05:15:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:06.484 05:15:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:06.485 05:15:03 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:17:06.485 05:15:03 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:17:06.485 05:15:03 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:17:06.485 run this test only with TCP transport for now 00:17:06.485 05:15:03 -- target/multipath.sh@53 -- # nvmftestfini 00:17:06.485 05:15:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:06.485 05:15:03 -- nvmf/common.sh@116 -- # sync 00:17:06.485 05:15:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:06.485 05:15:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:06.485 05:15:03 -- nvmf/common.sh@119 -- # set +e 00:17:06.485 05:15:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:06.485 05:15:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:06.485 rmmod nvme_rdma 00:17:06.485 rmmod nvme_fabrics 00:17:06.485 05:15:03 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:06.485 05:15:03 -- nvmf/common.sh@123 -- # set -e 00:17:06.485 05:15:03 -- nvmf/common.sh@124 -- # return 0 00:17:06.485 05:15:03 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:06.485 05:15:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:06.485 05:15:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:06.485 05:15:03 -- target/multipath.sh@54 -- # exit 0 00:17:06.485 05:15:03 -- target/multipath.sh@1 -- # nvmftestfini 00:17:06.485 05:15:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:06.485 05:15:03 -- nvmf/common.sh@116 -- # sync 00:17:06.485 05:15:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:06.485 05:15:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:06.485 05:15:03 -- nvmf/common.sh@119 -- # set +e 00:17:06.485 05:15:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:06.485 05:15:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:06.485 05:15:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:06.485 05:15:03 -- nvmf/common.sh@123 -- # set -e 00:17:06.485 05:15:03 -- nvmf/common.sh@124 -- # return 0 00:17:06.485 05:15:03 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:06.485 05:15:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:06.485 05:15:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:06.485 00:17:06.485 real 0m5.597s 00:17:06.485 user 0m1.624s 00:17:06.485 sys 0m4.056s 00:17:06.485 05:15:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:06.485 05:15:03 -- common/autotest_common.sh@10 -- # set +x 00:17:06.485 ************************************ 00:17:06.485 END TEST nvmf_multipath 00:17:06.485 ************************************ 00:17:06.485 05:15:03 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:17:06.485 05:15:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:06.485 05:15:03 -- common/autotest_common.sh@1093 -- 
# xtrace_disable 00:17:06.485 05:15:03 -- common/autotest_common.sh@10 -- # set +x 00:17:06.485 ************************************ 00:17:06.485 START TEST nvmf_zcopy 00:17:06.485 ************************************ 00:17:06.485 05:15:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:17:06.485 * Looking for test storage... 00:17:06.485 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:06.485 05:15:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:06.485 05:15:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:06.485 05:15:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:06.745 05:15:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:06.745 05:15:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:06.745 05:15:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:06.745 05:15:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:06.745 05:15:03 -- scripts/common.sh@335 -- # IFS=.-: 00:17:06.745 05:15:03 -- scripts/common.sh@335 -- # read -ra ver1 00:17:06.745 05:15:03 -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.745 05:15:03 -- scripts/common.sh@336 -- # read -ra ver2 00:17:06.745 05:15:03 -- scripts/common.sh@337 -- # local 'op=<' 00:17:06.745 05:15:03 -- scripts/common.sh@339 -- # ver1_l=2 00:17:06.745 05:15:03 -- scripts/common.sh@340 -- # ver2_l=1 00:17:06.745 05:15:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:06.745 05:15:03 -- scripts/common.sh@343 -- # case "$op" in 00:17:06.745 05:15:03 -- scripts/common.sh@344 -- # : 1 00:17:06.745 05:15:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:06.745 05:15:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.745 05:15:03 -- scripts/common.sh@364 -- # decimal 1 00:17:06.745 05:15:03 -- scripts/common.sh@352 -- # local d=1 00:17:06.745 05:15:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.745 05:15:03 -- scripts/common.sh@354 -- # echo 1 00:17:06.745 05:15:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:06.745 05:15:03 -- scripts/common.sh@365 -- # decimal 2 00:17:06.745 05:15:03 -- scripts/common.sh@352 -- # local d=2 00:17:06.745 05:15:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.745 05:15:03 -- scripts/common.sh@354 -- # echo 2 00:17:06.745 05:15:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:06.745 05:15:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:06.745 05:15:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:06.745 05:15:03 -- scripts/common.sh@367 -- # return 0 00:17:06.745 05:15:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.745 05:15:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:06.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.745 --rc genhtml_branch_coverage=1 00:17:06.745 --rc genhtml_function_coverage=1 00:17:06.745 --rc genhtml_legend=1 00:17:06.745 --rc geninfo_all_blocks=1 00:17:06.745 --rc geninfo_unexecuted_blocks=1 00:17:06.745 00:17:06.745 ' 00:17:06.745 05:15:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:06.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.745 --rc genhtml_branch_coverage=1 00:17:06.745 --rc genhtml_function_coverage=1 00:17:06.745 --rc genhtml_legend=1 00:17:06.745 --rc geninfo_all_blocks=1 00:17:06.745 --rc geninfo_unexecuted_blocks=1 00:17:06.745 00:17:06.745 ' 00:17:06.745 05:15:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:06.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.745 --rc genhtml_branch_coverage=1 00:17:06.745 --rc 
genhtml_function_coverage=1 00:17:06.745 --rc genhtml_legend=1 00:17:06.745 --rc geninfo_all_blocks=1 00:17:06.745 --rc geninfo_unexecuted_blocks=1 00:17:06.745 00:17:06.745 ' 00:17:06.745 05:15:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:06.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.745 --rc genhtml_branch_coverage=1 00:17:06.745 --rc genhtml_function_coverage=1 00:17:06.745 --rc genhtml_legend=1 00:17:06.745 --rc geninfo_all_blocks=1 00:17:06.745 --rc geninfo_unexecuted_blocks=1 00:17:06.745 00:17:06.745 ' 00:17:06.745 05:15:03 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.745 05:15:03 -- nvmf/common.sh@7 -- # uname -s 00:17:06.745 05:15:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.745 05:15:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.745 05:15:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.745 05:15:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.745 05:15:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.745 05:15:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.745 05:15:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.745 05:15:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.745 05:15:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.745 05:15:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.745 05:15:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:06.745 05:15:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:06.745 05:15:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.745 05:15:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.745 05:15:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:06.745 05:15:03 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:06.745 05:15:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.745 05:15:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.745 05:15:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.745 05:15:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.745 05:15:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.745 05:15:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.745 05:15:03 -- paths/export.sh@5 -- # export PATH 00:17:06.746 05:15:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.746 05:15:03 -- nvmf/common.sh@46 -- # : 0 00:17:06.746 05:15:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:06.746 05:15:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:06.746 05:15:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:06.746 05:15:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.746 05:15:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.746 05:15:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:06.746 05:15:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:06.746 05:15:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:06.746 05:15:03 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:06.746 05:15:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:06.746 05:15:03 -- nvmf/common.sh@434 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:17:06.746 05:15:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:06.746 05:15:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:06.746 05:15:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:06.746 05:15:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.746 05:15:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.746 05:15:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.746 05:15:03 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:17:06.746 05:15:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:06.746 05:15:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:06.746 05:15:03 -- common/autotest_common.sh@10 -- # set +x 00:17:12.024 05:15:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:12.024 05:15:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:12.024 05:15:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:12.024 05:15:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:12.024 05:15:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:12.024 05:15:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:12.024 05:15:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:12.024 05:15:08 -- nvmf/common.sh@294 -- # net_devs=() 00:17:12.024 05:15:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:12.025 05:15:08 -- nvmf/common.sh@295 -- # e810=() 00:17:12.025 05:15:08 -- nvmf/common.sh@295 -- # local -ga e810 00:17:12.025 05:15:08 -- nvmf/common.sh@296 -- # x722=() 00:17:12.025 05:15:08 -- nvmf/common.sh@296 -- # local -ga x722 00:17:12.025 05:15:08 -- nvmf/common.sh@297 -- # mlx=() 00:17:12.025 05:15:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:12.025 05:15:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.025 05:15:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:12.025 05:15:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:12.025 05:15:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:12.025 05:15:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:12.025 05:15:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:12.025 05:15:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:12.025 05:15:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:12.025 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:12.025 05:15:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme 
connect -i 15' 00:17:12.025 05:15:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:12.025 05:15:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:12.025 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:12.025 05:15:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:12.025 05:15:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:12.025 05:15:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:17:12.025 05:15:08 -- nvmf/common.sh@376 -- # modinfo irdma 00:17:12.025 05:15:08 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:17:12.025 05:15:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:12.025 05:15:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.025 05:15:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:12.025 05:15:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.025 05:15:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:12.025 Found net devices under 0000:af:00.0: cvl_0_0 00:17:12.025 05:15:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.025 05:15:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:12.025 05:15:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.025 05:15:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:12.025 
05:15:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.025 05:15:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:12.025 Found net devices under 0000:af:00.1: cvl_0_1 00:17:12.025 05:15:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.025 05:15:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:12.025 05:15:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:12.025 05:15:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:12.025 05:15:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:12.025 05:15:08 -- nvmf/common.sh@57 -- # uname 00:17:12.025 05:15:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:12.025 05:15:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:12.025 05:15:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:12.025 05:15:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:12.025 05:15:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:12.025 05:15:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:12.025 05:15:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:12.025 05:15:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:12.025 05:15:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:12.025 05:15:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:12.025 05:15:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:12.025 05:15:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:12.025 05:15:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:12.025 05:15:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:12.025 05:15:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:12.025 05:15:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:12.025 
05:15:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:12.025 05:15:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.025 05:15:08 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:12.025 05:15:08 -- nvmf/common.sh@104 -- # continue 2 00:17:12.025 05:15:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:12.025 05:15:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.025 05:15:08 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.025 05:15:08 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:12.025 05:15:08 -- nvmf/common.sh@104 -- # continue 2 00:17:12.025 05:15:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:12.025 05:15:08 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:17:12.025 05:15:08 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:12.025 05:15:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:12.025 05:15:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:12.025 05:15:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:12.025 05:15:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:12.025 05:15:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:17:12.025 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:12.025 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:12.025 altname enp175s0f0np0 00:17:12.025 altname ens801f0np0 00:17:12.025 inet 192.168.100.8/24 scope global cvl_0_0 00:17:12.025 valid_lft forever preferred_lft forever 00:17:12.025 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:12.025 valid_lft forever preferred_lft forever 00:17:12.025 05:15:08 
-- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:12.025 05:15:08 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:17:12.025 05:15:08 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:12.025 05:15:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:12.025 05:15:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:12.025 05:15:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:12.025 05:15:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:12.025 05:15:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:17:12.025 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:12.025 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:12.025 altname enp175s0f1np1 00:17:12.025 altname ens801f1np1 00:17:12.025 inet 192.168.100.9/24 scope global cvl_0_1 00:17:12.025 valid_lft forever preferred_lft forever 00:17:12.025 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:12.025 valid_lft forever preferred_lft forever 00:17:12.025 05:15:08 -- nvmf/common.sh@410 -- # return 0 00:17:12.025 05:15:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:12.025 05:15:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:12.025 05:15:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:12.025 05:15:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:12.025 05:15:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:12.025 05:15:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:12.025 05:15:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:12.025 05:15:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:12.025 05:15:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:12.285 05:15:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:12.285 05:15:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:12.285 05:15:08 -- 
nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.285 05:15:08 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:12.285 05:15:08 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:12.285 05:15:08 -- nvmf/common.sh@104 -- # continue 2 00:17:12.285 05:15:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:12.285 05:15:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.285 05:15:08 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:12.285 05:15:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.285 05:15:08 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:12.285 05:15:08 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:12.285 05:15:08 -- nvmf/common.sh@104 -- # continue 2 00:17:12.285 05:15:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:12.285 05:15:08 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:17:12.285 05:15:08 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:12.285 05:15:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:12.285 05:15:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:12.285 05:15:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:12.285 05:15:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:12.285 05:15:08 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:17:12.285 05:15:08 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:12.286 05:15:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:12.286 05:15:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:12.286 05:15:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:12.286 05:15:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:12.286 192.168.100.9' 00:17:12.286 05:15:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:12.286 192.168.100.9' 00:17:12.286 05:15:08 -- nvmf/common.sh@445 -- # head -n 1 00:17:12.286 05:15:08 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:12.286 05:15:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:12.286 192.168.100.9' 00:17:12.286 05:15:08 -- nvmf/common.sh@446 -- # tail -n +2 00:17:12.286 05:15:08 -- nvmf/common.sh@446 -- # head -n 1 00:17:12.286 05:15:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:12.286 05:15:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:12.286 05:15:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:12.286 05:15:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:12.286 05:15:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:12.286 05:15:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:12.286 05:15:08 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:12.286 05:15:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:12.286 05:15:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.286 05:15:08 -- common/autotest_common.sh@10 -- # set +x 00:17:12.286 05:15:08 -- nvmf/common.sh@469 -- # nvmfpid=278700 00:17:12.286 05:15:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:12.286 05:15:08 -- nvmf/common.sh@470 -- # waitforlisten 278700 00:17:12.286 05:15:08 -- common/autotest_common.sh@829 -- # '[' -z 278700 ']' 00:17:12.286 05:15:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.286 05:15:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.286 05:15:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:12.286 05:15:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.286 05:15:08 -- common/autotest_common.sh@10 -- # set +x 00:17:12.286 [2024-11-20 05:15:08.968224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:12.286 [2024-11-20 05:15:08.968279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.286 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.286 [2024-11-20 05:15:09.022848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.286 [2024-11-20 05:15:09.096767] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:12.286 [2024-11-20 05:15:09.096870] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.286 [2024-11-20 05:15:09.096877] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.286 [2024-11-20 05:15:09.096883] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:12.286 [2024-11-20 05:15:09.096898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.225 05:15:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.225 05:15:09 -- common/autotest_common.sh@862 -- # return 0 00:17:13.225 05:15:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:13.225 05:15:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:13.225 05:15:09 -- common/autotest_common.sh@10 -- # set +x 00:17:13.225 05:15:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.225 05:15:09 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:17:13.225 05:15:09 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:17:13.225 Unsupported transport: rdma 00:17:13.225 05:15:09 -- target/zcopy.sh@17 -- # exit 0 00:17:13.225 05:15:09 -- target/zcopy.sh@1 -- # process_shm --id 0 00:17:13.225 05:15:09 -- common/autotest_common.sh@806 -- # type=--id 00:17:13.225 05:15:09 -- common/autotest_common.sh@807 -- # id=0 00:17:13.225 05:15:09 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:13.225 05:15:09 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:13.225 05:15:09 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:13.225 05:15:09 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:13.225 05:15:09 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:13.225 05:15:09 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:13.225 nvmf_trace.0 00:17:13.225 05:15:09 -- common/autotest_common.sh@821 -- # return 0 00:17:13.225 05:15:09 -- target/zcopy.sh@1 -- # nvmftestfini 00:17:13.225 05:15:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:13.225 05:15:09 -- nvmf/common.sh@116 -- # sync 00:17:13.225 05:15:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 
00:17:13.225 05:15:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:13.225 05:15:09 -- nvmf/common.sh@119 -- # set +e 00:17:13.225 05:15:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:13.225 05:15:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:13.225 rmmod nvme_rdma 00:17:13.225 rmmod nvme_fabrics 00:17:13.225 05:15:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:13.225 05:15:09 -- nvmf/common.sh@123 -- # set -e 00:17:13.225 05:15:09 -- nvmf/common.sh@124 -- # return 0 00:17:13.225 05:15:09 -- nvmf/common.sh@477 -- # '[' -n 278700 ']' 00:17:13.225 05:15:09 -- nvmf/common.sh@478 -- # killprocess 278700 00:17:13.225 05:15:09 -- common/autotest_common.sh@936 -- # '[' -z 278700 ']' 00:17:13.225 05:15:09 -- common/autotest_common.sh@940 -- # kill -0 278700 00:17:13.225 05:15:09 -- common/autotest_common.sh@941 -- # uname 00:17:13.225 05:15:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.225 05:15:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 278700 00:17:13.225 05:15:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:13.225 05:15:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:13.225 05:15:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 278700' 00:17:13.225 killing process with pid 278700 00:17:13.225 05:15:09 -- common/autotest_common.sh@955 -- # kill 278700 00:17:13.225 05:15:09 -- common/autotest_common.sh@960 -- # wait 278700 00:17:13.485 05:15:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:13.485 05:15:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:13.485 00:17:13.485 real 0m6.941s 00:17:13.485 user 0m3.138s 00:17:13.485 sys 0m4.444s 00:17:13.485 05:15:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:13.485 05:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:13.485 ************************************ 00:17:13.485 END TEST nvmf_zcopy 00:17:13.485 
************************************ 00:17:13.485 05:15:10 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:17:13.485 05:15:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:13.485 05:15:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.485 05:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:13.485 ************************************ 00:17:13.485 START TEST nvmf_nmic 00:17:13.485 ************************************ 00:17:13.485 05:15:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:17:13.485 * Looking for test storage... 00:17:13.485 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:13.485 05:15:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:13.485 05:15:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:13.485 05:15:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:13.744 05:15:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:13.744 05:15:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:13.744 05:15:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:13.744 05:15:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:13.744 05:15:10 -- scripts/common.sh@335 -- # IFS=.-: 00:17:13.744 05:15:10 -- scripts/common.sh@335 -- # read -ra ver1 00:17:13.744 05:15:10 -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.744 05:15:10 -- scripts/common.sh@336 -- # read -ra ver2 00:17:13.744 05:15:10 -- scripts/common.sh@337 -- # local 'op=<' 00:17:13.744 05:15:10 -- scripts/common.sh@339 -- # ver1_l=2 00:17:13.744 05:15:10 -- scripts/common.sh@340 -- # ver2_l=1 00:17:13.744 05:15:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:13.744 05:15:10 -- scripts/common.sh@343 -- # case "$op" in 00:17:13.744 05:15:10 -- scripts/common.sh@344 -- # : 1 
00:17:13.744 05:15:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:13.744 05:15:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:13.744 05:15:10 -- scripts/common.sh@364 -- # decimal 1 00:17:13.744 05:15:10 -- scripts/common.sh@352 -- # local d=1 00:17:13.744 05:15:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.744 05:15:10 -- scripts/common.sh@354 -- # echo 1 00:17:13.744 05:15:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:13.744 05:15:10 -- scripts/common.sh@365 -- # decimal 2 00:17:13.744 05:15:10 -- scripts/common.sh@352 -- # local d=2 00:17:13.744 05:15:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.744 05:15:10 -- scripts/common.sh@354 -- # echo 2 00:17:13.744 05:15:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:13.744 05:15:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:13.744 05:15:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:13.744 05:15:10 -- scripts/common.sh@367 -- # return 0 00:17:13.744 05:15:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.744 05:15:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:13.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.744 --rc genhtml_branch_coverage=1 00:17:13.744 --rc genhtml_function_coverage=1 00:17:13.744 --rc genhtml_legend=1 00:17:13.744 --rc geninfo_all_blocks=1 00:17:13.744 --rc geninfo_unexecuted_blocks=1 00:17:13.744 00:17:13.744 ' 00:17:13.744 05:15:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:13.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.744 --rc genhtml_branch_coverage=1 00:17:13.744 --rc genhtml_function_coverage=1 00:17:13.744 --rc genhtml_legend=1 00:17:13.744 --rc geninfo_all_blocks=1 00:17:13.744 --rc geninfo_unexecuted_blocks=1 00:17:13.744 00:17:13.744 ' 00:17:13.744 05:15:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:17:13.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.744 --rc genhtml_branch_coverage=1 00:17:13.744 --rc genhtml_function_coverage=1 00:17:13.744 --rc genhtml_legend=1 00:17:13.744 --rc geninfo_all_blocks=1 00:17:13.745 --rc geninfo_unexecuted_blocks=1 00:17:13.745 00:17:13.745 ' 00:17:13.745 05:15:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:13.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.745 --rc genhtml_branch_coverage=1 00:17:13.745 --rc genhtml_function_coverage=1 00:17:13.745 --rc genhtml_legend=1 00:17:13.745 --rc geninfo_all_blocks=1 00:17:13.745 --rc geninfo_unexecuted_blocks=1 00:17:13.745 00:17:13.745 ' 00:17:13.745 05:15:10 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.745 05:15:10 -- nvmf/common.sh@7 -- # uname -s 00:17:13.745 05:15:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.745 05:15:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.745 05:15:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.745 05:15:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.745 05:15:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.745 05:15:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.745 05:15:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.745 05:15:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.745 05:15:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.745 05:15:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.745 05:15:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:13.745 05:15:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:13.745 05:15:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.745 05:15:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:17:13.745 05:15:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:13.745 05:15:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:13.745 05:15:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.745 05:15:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.745 05:15:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.745 05:15:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.745 05:15:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.745 05:15:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.745 05:15:10 -- paths/export.sh@5 -- # export PATH 00:17:13.745 05:15:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.745 05:15:10 -- nvmf/common.sh@46 -- # : 0 00:17:13.745 05:15:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:13.745 05:15:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:13.745 05:15:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:13.745 05:15:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.745 05:15:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.745 05:15:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:13.745 05:15:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:13.745 05:15:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:13.745 05:15:10 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.745 05:15:10 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.745 05:15:10 -- target/nmic.sh@14 -- # 
nvmftestinit 00:17:13.745 05:15:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:13.745 05:15:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.745 05:15:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:13.745 05:15:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:13.745 05:15:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:13.745 05:15:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.745 05:15:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.745 05:15:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.745 05:15:10 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:17:13.745 05:15:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:13.745 05:15:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:13.745 05:15:10 -- common/autotest_common.sh@10 -- # set +x 00:17:19.024 05:15:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:19.024 05:15:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:19.024 05:15:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:19.024 05:15:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:19.024 05:15:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:19.024 05:15:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:19.024 05:15:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:19.024 05:15:15 -- nvmf/common.sh@294 -- # net_devs=() 00:17:19.024 05:15:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:19.024 05:15:15 -- nvmf/common.sh@295 -- # e810=() 00:17:19.024 05:15:15 -- nvmf/common.sh@295 -- # local -ga e810 00:17:19.024 05:15:15 -- nvmf/common.sh@296 -- # x722=() 00:17:19.024 05:15:15 -- nvmf/common.sh@296 -- # local -ga x722 00:17:19.024 05:15:15 -- nvmf/common.sh@297 -- # mlx=() 00:17:19.024 05:15:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:19.024 05:15:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.024 05:15:15 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.024 05:15:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:19.024 05:15:15 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:19.024 05:15:15 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:19.024 05:15:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:19.024 05:15:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:19.024 05:15:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:19.024 05:15:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:19.024 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:19.024 05:15:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.024 05:15:15 
-- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:19.024 05:15:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:19.024 05:15:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:19.024 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:19.024 05:15:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:19.024 05:15:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:19.024 05:15:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:17:19.024 05:15:15 -- nvmf/common.sh@376 -- # modinfo irdma 00:17:19.024 05:15:15 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:17:19.024 05:15:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:19.024 05:15:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.024 05:15:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:19.024 05:15:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.024 05:15:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:19.024 Found net devices under 0000:af:00.0: cvl_0_0 00:17:19.024 05:15:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.024 05:15:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:19.024 05:15:15 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.024 05:15:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:19.024 05:15:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.024 05:15:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:19.024 Found net devices under 0000:af:00.1: cvl_0_1 00:17:19.024 05:15:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.024 05:15:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:19.024 05:15:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:19.024 05:15:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:19.024 05:15:15 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:19.024 05:15:15 -- nvmf/common.sh@57 -- # uname 00:17:19.024 05:15:15 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:19.024 05:15:15 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:19.024 05:15:15 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:19.024 05:15:15 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:19.024 05:15:15 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:19.024 05:15:15 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:19.024 05:15:15 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:19.024 05:15:15 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:19.024 05:15:15 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:19.024 05:15:15 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:19.024 05:15:15 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:19.024 05:15:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:19.024 05:15:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:19.024 05:15:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:19.024 05:15:15 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:19.024 05:15:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:19.024 05:15:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:19.024 05:15:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:19.024 05:15:15 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:19.024 05:15:15 -- nvmf/common.sh@104 -- # continue 2 00:17:19.024 05:15:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:19.024 05:15:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:19.024 05:15:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:19.024 05:15:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:19.024 05:15:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:19.025 05:15:15 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:19.025 05:15:15 -- nvmf/common.sh@104 -- # continue 2 00:17:19.025 05:15:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:19.025 05:15:15 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:17:19.025 05:15:15 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:19.025 05:15:15 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:19.025 05:15:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:19.025 05:15:15 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:17:19.025 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:19.025 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:19.025 altname enp175s0f0np0 00:17:19.025 altname ens801f0np0 00:17:19.025 inet 192.168.100.8/24 scope global cvl_0_0 00:17:19.025 valid_lft forever preferred_lft 
forever 00:17:19.025 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:19.025 valid_lft forever preferred_lft forever 00:17:19.025 05:15:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:19.025 05:15:15 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:17:19.025 05:15:15 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:19.025 05:15:15 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:19.025 05:15:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:19.025 05:15:15 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:17:19.025 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:19.025 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:19.025 altname enp175s0f1np1 00:17:19.025 altname ens801f1np1 00:17:19.025 inet 192.168.100.9/24 scope global cvl_0_1 00:17:19.025 valid_lft forever preferred_lft forever 00:17:19.025 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:19.025 valid_lft forever preferred_lft forever 00:17:19.025 05:15:15 -- nvmf/common.sh@410 -- # return 0 00:17:19.025 05:15:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:19.025 05:15:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:19.025 05:15:15 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:19.025 05:15:15 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:19.025 05:15:15 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:19.025 05:15:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:19.025 05:15:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:19.025 05:15:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:19.025 05:15:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:19.025 05:15:15 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:19.025 05:15:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:19.025 05:15:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:19.025 05:15:15 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:19.025 05:15:15 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:19.025 05:15:15 -- nvmf/common.sh@104 -- # continue 2 00:17:19.025 05:15:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:19.025 05:15:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:19.025 05:15:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:19.025 05:15:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:19.025 05:15:15 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:19.025 05:15:15 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:19.025 05:15:15 -- nvmf/common.sh@104 -- # continue 2 00:17:19.025 05:15:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:19.025 05:15:15 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:17:19.025 05:15:15 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:19.025 05:15:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:19.025 05:15:15 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:17:19.025 05:15:15 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:19.025 05:15:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:19.025 05:15:15 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:19.025 192.168.100.9' 00:17:19.025 05:15:15 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:19.025 
192.168.100.9' 00:17:19.025 05:15:15 -- nvmf/common.sh@445 -- # head -n 1 00:17:19.025 05:15:15 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:19.025 05:15:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:19.025 192.168.100.9' 00:17:19.025 05:15:15 -- nvmf/common.sh@446 -- # tail -n +2 00:17:19.025 05:15:15 -- nvmf/common.sh@446 -- # head -n 1 00:17:19.025 05:15:15 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:19.025 05:15:15 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:19.025 05:15:15 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:19.025 05:15:15 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:19.025 05:15:15 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:19.025 05:15:15 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:19.025 05:15:15 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:19.025 05:15:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:19.025 05:15:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:19.025 05:15:15 -- common/autotest_common.sh@10 -- # set +x 00:17:19.025 05:15:15 -- nvmf/common.sh@469 -- # nvmfpid=282003 00:17:19.025 05:15:15 -- nvmf/common.sh@470 -- # waitforlisten 282003 00:17:19.025 05:15:15 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:19.025 05:15:15 -- common/autotest_common.sh@829 -- # '[' -z 282003 ']' 00:17:19.025 05:15:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.025 05:15:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.025 05:15:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:19.025 05:15:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.025 05:15:15 -- common/autotest_common.sh@10 -- # set +x 00:17:19.025 [2024-11-20 05:15:15.814056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:19.025 [2024-11-20 05:15:15.814097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.025 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.285 [2024-11-20 05:15:15.868420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.285 [2024-11-20 05:15:15.943915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:19.285 [2024-11-20 05:15:15.944026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.285 [2024-11-20 05:15:15.944033] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.285 [2024-11-20 05:15:15.944039] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:19.285 [2024-11-20 05:15:15.944101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.285 [2024-11-20 05:15:15.944144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.285 [2024-11-20 05:15:15.944242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.285 [2024-11-20 05:15:15.944243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.854 05:15:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.854 05:15:16 -- common/autotest_common.sh@862 -- # return 0 00:17:19.854 05:15:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:19.854 05:15:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.854 05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:19.854 05:15:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.854 05:15:16 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:19.854 05:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.854 05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.115 [2024-11-20 05:15:16.683355] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xec3100/0xec2740) succeed. 00:17:20.115 [2024-11-20 05:15:16.692134] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xec4470/0xec2cc0) succeed. 00:17:20.115 [2024-11-20 05:15:16.692156] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:17:20.115 05:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.115 05:15:16 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:20.115 05:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.115 05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.115 Malloc0 00:17:20.115 05:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.115 05:15:16 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:20.115 05:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.115 05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.115 05:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.115 05:15:16 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:20.115 05:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.115 05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.115 05:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.115 05:15:16 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:20.115 05:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.115 05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.115 [2024-11-20 05:15:16.751580] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:20.115 05:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.115 05:15:16 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:20.115 test case1: single bdev can't be used in multiple subsystems 00:17:20.115 05:15:16 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:20.115 05:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.115 
05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.115 05:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.115 05:15:16 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:17:20.115 05:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.115 05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.115 05:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.115 05:15:16 -- target/nmic.sh@28 -- # nmic_status=0 00:17:20.115 05:15:16 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:20.115 05:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.115 05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.115 [2024-11-20 05:15:16.775609] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:20.115 [2024-11-20 05:15:16.775626] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:20.115 [2024-11-20 05:15:16.775633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.115 request: 00:17:20.115 { 00:17:20.115 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:20.115 "namespace": { 00:17:20.115 "bdev_name": "Malloc0" 00:17:20.115 }, 00:17:20.115 "method": "nvmf_subsystem_add_ns", 00:17:20.115 "req_id": 1 00:17:20.115 } 00:17:20.115 Got JSON-RPC error response 00:17:20.115 response: 00:17:20.115 { 00:17:20.115 "code": -32602, 00:17:20.115 "message": "Invalid parameters" 00:17:20.115 } 00:17:20.115 05:15:16 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:20.115 05:15:16 -- target/nmic.sh@29 -- # nmic_status=1 00:17:20.115 05:15:16 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:20.115 05:15:16 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:17:20.115 Adding namespace failed - expected result. 00:17:20.115 05:15:16 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:20.115 test case2: host connect to nvmf target in multiple paths 00:17:20.115 05:15:16 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:17:20.116 05:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.116 05:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:20.116 [2024-11-20 05:15:16.787652] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:20.116 05:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.116 05:15:16 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:20.376 05:15:17 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:17:20.635 05:15:17 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.635 05:15:17 -- common/autotest_common.sh@1187 -- # local i=0 00:17:20.635 05:15:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.635 05:15:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:20.635 05:15:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:22.543 05:15:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:22.543 05:15:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:22.543 05:15:19 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:17:22.543 05:15:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:22.543 
05:15:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:22.543 05:15:19 -- common/autotest_common.sh@1197 -- # return 0 00:17:22.543 05:15:19 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:22.543 [global] 00:17:22.543 thread=1 00:17:22.543 invalidate=1 00:17:22.543 rw=write 00:17:22.543 time_based=1 00:17:22.543 runtime=1 00:17:22.544 ioengine=libaio 00:17:22.544 direct=1 00:17:22.544 bs=4096 00:17:22.544 iodepth=1 00:17:22.544 norandommap=0 00:17:22.544 numjobs=1 00:17:22.544 00:17:22.544 verify_dump=1 00:17:22.544 verify_backlog=512 00:17:22.544 verify_state_save=0 00:17:22.544 do_verify=1 00:17:22.544 verify=crc32c-intel 00:17:22.544 [job0] 00:17:22.544 filename=/dev/nvme0n1 00:17:22.544 Could not set queue depth (nvme0n1) 00:17:23.114 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.114 fio-3.35 00:17:23.114 Starting 1 thread 00:17:24.495 00:17:24.495 job0: (groupid=0, jobs=1): err= 0: pid=282796: Wed Nov 20 05:15:20 2024 00:17:24.495 read: IOPS=6383, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1001msec) 00:17:24.495 slat (nsec): min=6620, max=37401, avg=7597.19, stdev=909.72 00:17:24.495 clat (usec): min=47, max=137, avg=66.59, stdev= 4.30 00:17:24.495 lat (usec): min=62, max=146, avg=74.19, stdev= 4.38 00:17:24.495 clat percentiles (usec): 00:17:24.495 | 1.00th=[ 60], 5.00th=[ 62], 10.00th=[ 62], 20.00th=[ 64], 00:17:24.495 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 67], 60.00th=[ 68], 00:17:24.495 | 70.00th=[ 69], 80.00th=[ 71], 90.00th=[ 72], 95.00th=[ 74], 00:17:24.495 | 99.00th=[ 77], 99.50th=[ 80], 99.90th=[ 103], 99.95th=[ 120], 00:17:24.495 | 99.99th=[ 139] 00:17:24.495 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:17:24.495 slat (nsec): min=8866, max=46022, avg=9655.88, stdev=1025.67 00:17:24.495 clat (usec): min=53, max=402, avg=65.20, 
stdev= 6.36 00:17:24.495 lat (usec): min=65, max=411, avg=74.86, stdev= 6.49 00:17:24.495 clat percentiles (usec): 00:17:24.495 | 1.00th=[ 59], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62], 00:17:24.495 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 67], 00:17:24.495 | 70.00th=[ 68], 80.00th=[ 69], 90.00th=[ 71], 95.00th=[ 73], 00:17:24.495 | 99.00th=[ 77], 99.50th=[ 80], 99.90th=[ 108], 99.95th=[ 119], 00:17:24.495 | 99.99th=[ 404] 00:17:24.495 bw ( KiB/s): min=27672, max=27672, per=100.00%, avg=27672.00, stdev= 0.00, samples=1 00:17:24.495 iops : min= 6918, max= 6918, avg=6918.00, stdev= 0.00, samples=1 00:17:24.495 lat (usec) : 50=0.02%, 100=99.85%, 250=0.12%, 500=0.01% 00:17:24.495 cpu : usr=8.70%, sys=13.70%, ctx=13046, majf=0, minf=1 00:17:24.495 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.495 issued rwts: total=6390,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.495 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:24.495 00:17:24.495 Run status group 0 (all jobs): 00:17:24.495 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=25.0MiB (26.2MB), run=1001-1001msec 00:17:24.495 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:17:24.495 00:17:24.495 Disk stats (read/write): 00:17:24.495 nvme0n1: ios=5682/6092, merge=0/0, ticks=339/357, in_queue=696, util=90.78% 00:17:24.495 05:15:20 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:25.876 05:15:22 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.876 05:15:22 -- common/autotest_common.sh@1208 -- # local i=0 00:17:25.876 05:15:22 -- common/autotest_common.sh@1209 
-- # lsblk -o NAME,SERIAL 00:17:25.876 05:15:22 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.136 05:15:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:26.136 05:15:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.136 05:15:22 -- common/autotest_common.sh@1220 -- # return 0 00:17:26.136 05:15:22 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:26.136 05:15:22 -- target/nmic.sh@53 -- # nvmftestfini 00:17:26.136 05:15:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:26.136 05:15:22 -- nvmf/common.sh@116 -- # sync 00:17:26.136 05:15:22 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:26.136 05:15:22 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:26.136 05:15:22 -- nvmf/common.sh@119 -- # set +e 00:17:26.136 05:15:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:26.136 05:15:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:26.136 rmmod nvme_rdma 00:17:26.136 rmmod nvme_fabrics 00:17:26.136 05:15:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:26.136 05:15:22 -- nvmf/common.sh@123 -- # set -e 00:17:26.136 05:15:22 -- nvmf/common.sh@124 -- # return 0 00:17:26.136 05:15:22 -- nvmf/common.sh@477 -- # '[' -n 282003 ']' 00:17:26.136 05:15:22 -- nvmf/common.sh@478 -- # killprocess 282003 00:17:26.136 05:15:22 -- common/autotest_common.sh@936 -- # '[' -z 282003 ']' 00:17:26.136 05:15:22 -- common/autotest_common.sh@940 -- # kill -0 282003 00:17:26.136 05:15:22 -- common/autotest_common.sh@941 -- # uname 00:17:26.136 05:15:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.136 05:15:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 282003 00:17:26.136 05:15:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:26.136 05:15:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:26.136 05:15:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 282003' 
00:17:26.136 killing process with pid 282003 00:17:26.136 05:15:22 -- common/autotest_common.sh@955 -- # kill 282003 00:17:26.136 05:15:22 -- common/autotest_common.sh@960 -- # wait 282003 00:17:26.397 05:15:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:26.397 05:15:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:26.397 00:17:26.397 real 0m12.885s 00:17:26.397 user 0m35.839s 00:17:26.397 sys 0m5.119s 00:17:26.397 05:15:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:26.397 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:17:26.397 ************************************ 00:17:26.397 END TEST nvmf_nmic 00:17:26.397 ************************************ 00:17:26.397 05:15:23 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:17:26.397 05:15:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:26.397 05:15:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.397 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:17:26.397 ************************************ 00:17:26.397 START TEST nvmf_fio_target 00:17:26.397 ************************************ 00:17:26.397 05:15:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:17:26.397 * Looking for test storage... 
00:17:26.397 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:26.397 05:15:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:26.397 05:15:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:26.397 05:15:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:26.657 05:15:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:26.657 05:15:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:26.657 05:15:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:26.657 05:15:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:26.657 05:15:23 -- scripts/common.sh@335 -- # IFS=.-: 00:17:26.657 05:15:23 -- scripts/common.sh@335 -- # read -ra ver1 00:17:26.657 05:15:23 -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.657 05:15:23 -- scripts/common.sh@336 -- # read -ra ver2 00:17:26.657 05:15:23 -- scripts/common.sh@337 -- # local 'op=<' 00:17:26.657 05:15:23 -- scripts/common.sh@339 -- # ver1_l=2 00:17:26.657 05:15:23 -- scripts/common.sh@340 -- # ver2_l=1 00:17:26.657 05:15:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:26.658 05:15:23 -- scripts/common.sh@343 -- # case "$op" in 00:17:26.658 05:15:23 -- scripts/common.sh@344 -- # : 1 00:17:26.658 05:15:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:26.658 05:15:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:26.658 05:15:23 -- scripts/common.sh@364 -- # decimal 1 00:17:26.658 05:15:23 -- scripts/common.sh@352 -- # local d=1 00:17:26.658 05:15:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.658 05:15:23 -- scripts/common.sh@354 -- # echo 1 00:17:26.658 05:15:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:26.658 05:15:23 -- scripts/common.sh@365 -- # decimal 2 00:17:26.658 05:15:23 -- scripts/common.sh@352 -- # local d=2 00:17:26.658 05:15:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.658 05:15:23 -- scripts/common.sh@354 -- # echo 2 00:17:26.658 05:15:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:26.658 05:15:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:26.658 05:15:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:26.658 05:15:23 -- scripts/common.sh@367 -- # return 0 00:17:26.658 05:15:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.658 05:15:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:26.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.658 --rc genhtml_branch_coverage=1 00:17:26.658 --rc genhtml_function_coverage=1 00:17:26.658 --rc genhtml_legend=1 00:17:26.658 --rc geninfo_all_blocks=1 00:17:26.658 --rc geninfo_unexecuted_blocks=1 00:17:26.658 00:17:26.658 ' 00:17:26.658 05:15:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:26.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.658 --rc genhtml_branch_coverage=1 00:17:26.658 --rc genhtml_function_coverage=1 00:17:26.658 --rc genhtml_legend=1 00:17:26.658 --rc geninfo_all_blocks=1 00:17:26.658 --rc geninfo_unexecuted_blocks=1 00:17:26.658 00:17:26.658 ' 00:17:26.658 05:15:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:26.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.658 --rc genhtml_branch_coverage=1 00:17:26.658 --rc 
genhtml_function_coverage=1 00:17:26.658 --rc genhtml_legend=1 00:17:26.658 --rc geninfo_all_blocks=1 00:17:26.658 --rc geninfo_unexecuted_blocks=1 00:17:26.658 00:17:26.658 ' 00:17:26.658 05:15:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:26.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.658 --rc genhtml_branch_coverage=1 00:17:26.658 --rc genhtml_function_coverage=1 00:17:26.658 --rc genhtml_legend=1 00:17:26.658 --rc geninfo_all_blocks=1 00:17:26.658 --rc geninfo_unexecuted_blocks=1 00:17:26.658 00:17:26.658 ' 00:17:26.658 05:15:23 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.658 05:15:23 -- nvmf/common.sh@7 -- # uname -s 00:17:26.658 05:15:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.658 05:15:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.658 05:15:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.658 05:15:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.658 05:15:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.658 05:15:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.658 05:15:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.658 05:15:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.658 05:15:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.658 05:15:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.658 05:15:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:26.658 05:15:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:26.658 05:15:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.658 05:15:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.658 05:15:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:26.658 05:15:23 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:26.658 05:15:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.658 05:15:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.658 05:15:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.658 05:15:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.658 05:15:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.658 05:15:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.658 05:15:23 -- paths/export.sh@5 -- # export PATH 00:17:26.658 05:15:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.658 05:15:23 -- nvmf/common.sh@46 -- # : 0 00:17:26.658 05:15:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:26.658 05:15:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:26.658 05:15:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:26.658 05:15:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.658 05:15:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.658 05:15:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:26.658 05:15:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:26.658 05:15:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:26.658 05:15:23 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.658 05:15:23 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.658 05:15:23 -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:17:26.658 05:15:23 -- target/fio.sh@16 -- # nvmftestinit 00:17:26.658 05:15:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:26.658 05:15:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.658 05:15:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:26.658 05:15:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:26.658 05:15:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:26.658 05:15:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.658 05:15:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.658 05:15:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.658 05:15:23 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:17:26.658 05:15:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:26.658 05:15:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:26.658 05:15:23 -- common/autotest_common.sh@10 -- # set +x 00:17:31.936 05:15:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:31.936 05:15:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:31.936 05:15:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:31.936 05:15:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:31.936 05:15:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:31.936 05:15:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:31.936 05:15:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:31.936 05:15:28 -- nvmf/common.sh@294 -- # net_devs=() 00:17:31.936 05:15:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:31.936 05:15:28 -- nvmf/common.sh@295 -- # e810=() 00:17:31.936 05:15:28 -- nvmf/common.sh@295 -- # local -ga e810 00:17:31.936 05:15:28 -- nvmf/common.sh@296 -- # x722=() 00:17:31.936 05:15:28 -- nvmf/common.sh@296 -- # local -ga x722 00:17:31.936 05:15:28 -- nvmf/common.sh@297 -- # mlx=() 00:17:31.936 05:15:28 -- nvmf/common.sh@297 -- # local -ga 
mlx 00:17:31.936 05:15:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.936 05:15:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:31.936 05:15:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:31.936 05:15:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:31.936 05:15:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:31.936 05:15:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:31.936 05:15:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:31.936 05:15:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:31.936 05:15:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:31.936 05:15:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:31.936 05:15:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:31.936 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:31.936 05:15:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:31.936 05:15:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@349 -- # [[ 0x159b 
== \0\x\1\0\1\7 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:31.937 05:15:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:31.937 05:15:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:31.937 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:31.937 05:15:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:31.937 05:15:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:31.937 05:15:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:17:31.937 05:15:28 -- nvmf/common.sh@376 -- # modinfo irdma 00:17:31.937 05:15:28 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:17:31.937 05:15:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:31.937 05:15:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.937 05:15:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:31.937 05:15:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.937 05:15:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:31.937 Found net devices under 0000:af:00.0: cvl_0_0 00:17:31.937 05:15:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.937 05:15:28 
-- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:31.937 05:15:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.937 05:15:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:31.937 05:15:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.937 05:15:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:31.937 Found net devices under 0000:af:00.1: cvl_0_1 00:17:31.937 05:15:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.937 05:15:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:31.937 05:15:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:31.937 05:15:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:31.937 05:15:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:31.937 05:15:28 -- nvmf/common.sh@57 -- # uname 00:17:31.937 05:15:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:31.937 05:15:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:31.937 05:15:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:31.937 05:15:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:31.937 05:15:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:31.937 05:15:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:31.937 05:15:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:31.937 05:15:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:31.937 05:15:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:31.937 05:15:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:31.937 05:15:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:31.937 05:15:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:31.937 05:15:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:31.937 05:15:28 -- nvmf/common.sh@93 
-- # rxe_cfg rxe-net 00:17:31.937 05:15:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:31.937 05:15:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:31.937 05:15:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:31.937 05:15:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:31.937 05:15:28 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:31.937 05:15:28 -- nvmf/common.sh@104 -- # continue 2 00:17:31.937 05:15:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:31.937 05:15:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:31.937 05:15:28 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:31.937 05:15:28 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:31.937 05:15:28 -- nvmf/common.sh@104 -- # continue 2 00:17:31.937 05:15:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:31.937 05:15:28 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:17:31.937 05:15:28 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:31.937 05:15:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:31.937 05:15:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:31.937 05:15:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:31.937 05:15:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:31.937 05:15:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:17:31.937 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:31.937 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:31.937 altname enp175s0f0np0 00:17:31.937 altname ens801f0np0 00:17:31.937 inet 
192.168.100.8/24 scope global cvl_0_0 00:17:31.937 valid_lft forever preferred_lft forever 00:17:31.937 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:31.937 valid_lft forever preferred_lft forever 00:17:31.937 05:15:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:31.937 05:15:28 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:17:31.937 05:15:28 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:31.937 05:15:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:31.937 05:15:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:31.937 05:15:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:31.937 05:15:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:31.937 05:15:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:17:31.937 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:31.937 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:31.937 altname enp175s0f1np1 00:17:31.937 altname ens801f1np1 00:17:31.937 inet 192.168.100.9/24 scope global cvl_0_1 00:17:31.937 valid_lft forever preferred_lft forever 00:17:31.937 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:31.937 valid_lft forever preferred_lft forever 00:17:31.937 05:15:28 -- nvmf/common.sh@410 -- # return 0 00:17:31.937 05:15:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:31.937 05:15:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:31.937 05:15:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:31.937 05:15:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:31.937 05:15:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:31.937 05:15:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:31.937 05:15:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:31.937 05:15:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:31.937 05:15:28 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:31.938 05:15:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:31.938 05:15:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:31.938 05:15:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:31.938 05:15:28 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:31.938 05:15:28 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:31.938 05:15:28 -- nvmf/common.sh@104 -- # continue 2 00:17:31.938 05:15:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:31.938 05:15:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:31.938 05:15:28 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:31.938 05:15:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:31.938 05:15:28 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:31.938 05:15:28 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:31.938 05:15:28 -- nvmf/common.sh@104 -- # continue 2 00:17:31.938 05:15:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:31.938 05:15:28 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:17:31.938 05:15:28 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:31.938 05:15:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:31.938 05:15:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:31.938 05:15:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:31.938 05:15:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:31.938 05:15:28 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:17:31.938 05:15:28 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:31.938 05:15:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:31.938 05:15:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:31.938 05:15:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:31.938 05:15:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:31.938 
192.168.100.9' 00:17:31.938 05:15:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:31.938 192.168.100.9' 00:17:31.938 05:15:28 -- nvmf/common.sh@445 -- # head -n 1 00:17:31.938 05:15:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:31.938 05:15:28 -- nvmf/common.sh@446 -- # tail -n +2 00:17:31.938 05:15:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:31.938 192.168.100.9' 00:17:31.938 05:15:28 -- nvmf/common.sh@446 -- # head -n 1 00:17:31.938 05:15:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:31.938 05:15:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:31.938 05:15:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:31.938 05:15:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:31.938 05:15:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:31.938 05:15:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:31.938 05:15:28 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:31.938 05:15:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:31.938 05:15:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:31.938 05:15:28 -- common/autotest_common.sh@10 -- # set +x 00:17:31.938 05:15:28 -- nvmf/common.sh@469 -- # nvmfpid=286371 00:17:31.938 05:15:28 -- nvmf/common.sh@470 -- # waitforlisten 286371 00:17:31.938 05:15:28 -- common/autotest_common.sh@829 -- # '[' -z 286371 ']' 00:17:31.938 05:15:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.938 05:15:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:31.938 05:15:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.938 05:15:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:31.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.938 05:15:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.938 05:15:28 -- common/autotest_common.sh@10 -- # set +x 00:17:31.938 [2024-11-20 05:15:28.715620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:31.938 [2024-11-20 05:15:28.715663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.938 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.198 [2024-11-20 05:15:28.771011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.198 [2024-11-20 05:15:28.840220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:32.198 [2024-11-20 05:15:28.840330] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.198 [2024-11-20 05:15:28.840337] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.198 [2024-11-20 05:15:28.840343] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:32.198 [2024-11-20 05:15:28.840435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.198 [2024-11-20 05:15:28.840531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.198 [2024-11-20 05:15:28.840601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.198 [2024-11-20 05:15:28.840602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.767 05:15:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.767 05:15:29 -- common/autotest_common.sh@862 -- # return 0 00:17:32.767 05:15:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:32.767 05:15:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.767 05:15:29 -- common/autotest_common.sh@10 -- # set +x 00:17:32.767 05:15:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.767 05:15:29 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:33.027 [2024-11-20 05:15:29.767016] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1bff100/0x1bfe740) succeed. 00:17:33.027 [2024-11-20 05:15:29.776079] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1c00470/0x1bfecc0) succeed. 00:17:33.027 [2024-11-20 05:15:29.776102] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:17:33.027 05:15:29 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:33.286 05:15:30 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:33.286 05:15:30 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:33.546 05:15:30 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:33.546 05:15:30 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:33.805 05:15:30 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:33.805 05:15:30 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:33.805 05:15:30 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:33.805 05:15:30 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:34.065 05:15:30 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:34.324 05:15:31 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:34.324 05:15:31 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:34.583 05:15:31 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:34.584 05:15:31 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:34.584 05:15:31 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:34.843 05:15:31 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:34.843 05:15:31 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:35.102 05:15:31 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:35.103 05:15:31 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.362 05:15:31 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:35.362 05:15:31 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:35.362 05:15:32 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:35.621 [2024-11-20 05:15:32.346456] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:35.621 05:15:32 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:35.880 05:15:32 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:36.140 05:15:32 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:36.140 05:15:32 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:36.140 05:15:32 -- common/autotest_common.sh@1187 -- # local i=0 00:17:36.140 05:15:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:36.140 05:15:32 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:17:36.140 05:15:32 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:17:36.140 05:15:32 -- common/autotest_common.sh@1194 -- # sleep 2 
00:17:38.771 05:15:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:38.771 05:15:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:38.771 05:15:34 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:17:38.771 05:15:34 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:17:38.771 05:15:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.771 05:15:34 -- common/autotest_common.sh@1197 -- # return 0 00:17:38.771 05:15:34 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:38.771 [global] 00:17:38.771 thread=1 00:17:38.771 invalidate=1 00:17:38.771 rw=write 00:17:38.771 time_based=1 00:17:38.771 runtime=1 00:17:38.771 ioengine=libaio 00:17:38.771 direct=1 00:17:38.771 bs=4096 00:17:38.771 iodepth=1 00:17:38.771 norandommap=0 00:17:38.771 numjobs=1 00:17:38.771 00:17:38.771 verify_dump=1 00:17:38.771 verify_backlog=512 00:17:38.771 verify_state_save=0 00:17:38.771 do_verify=1 00:17:38.771 verify=crc32c-intel 00:17:38.771 [job0] 00:17:38.771 filename=/dev/nvme0n1 00:17:38.771 [job1] 00:17:38.771 filename=/dev/nvme0n2 00:17:38.771 [job2] 00:17:38.771 filename=/dev/nvme0n3 00:17:38.771 [job3] 00:17:38.771 filename=/dev/nvme0n4 00:17:38.771 Could not set queue depth (nvme0n1) 00:17:38.771 Could not set queue depth (nvme0n2) 00:17:38.771 Could not set queue depth (nvme0n3) 00:17:38.771 Could not set queue depth (nvme0n4) 00:17:38.771 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:38.771 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:38.771 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:38.771 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:17:38.771 fio-3.35 00:17:38.771 Starting 4 threads 00:17:39.761 00:17:39.761 job0: (groupid=0, jobs=1): err= 0: pid=287724: Wed Nov 20 05:15:36 2024 00:17:39.761 read: IOPS=4860, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec) 00:17:39.761 slat (nsec): min=6259, max=18266, avg=7459.78, stdev=744.48 00:17:39.761 clat (usec): min=72, max=154, avg=90.57, stdev=12.54 00:17:39.761 lat (usec): min=79, max=161, avg=98.03, stdev=12.54 00:17:39.761 clat percentiles (usec): 00:17:39.761 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 83], 00:17:39.761 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 89], 00:17:39.761 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 124], 00:17:39.761 | 99.00th=[ 137], 99.50th=[ 141], 99.90th=[ 147], 99.95th=[ 149], 00:17:39.761 | 99.99th=[ 155] 00:17:39.761 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:17:39.761 slat (nsec): min=8460, max=44529, avg=9553.87, stdev=1202.82 00:17:39.761 clat (usec): min=62, max=220, avg=88.51, stdev=12.25 00:17:39.761 lat (usec): min=80, max=229, avg=98.06, stdev=12.29 00:17:39.761 clat percentiles (usec): 00:17:39.761 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 81], 00:17:39.761 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 87], 00:17:39.761 | 70.00th=[ 89], 80.00th=[ 92], 90.00th=[ 104], 95.00th=[ 120], 00:17:39.761 | 99.00th=[ 133], 99.50th=[ 137], 99.90th=[ 147], 99.95th=[ 155], 00:17:39.761 | 99.99th=[ 221] 00:17:39.761 bw ( KiB/s): min=21016, max=21016, per=28.61%, avg=21016.00, stdev= 0.00, samples=1 00:17:39.761 iops : min= 5254, max= 5254, avg=5254.00, stdev= 0.00, samples=1 00:17:39.761 lat (usec) : 100=88.35%, 250=11.65% 00:17:39.761 cpu : usr=5.80%, sys=11.20%, ctx=9986, majf=0, minf=1 00:17:39.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:17:39.761 issued rwts: total=4865,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.761 job1: (groupid=0, jobs=1): err= 0: pid=287725: Wed Nov 20 05:15:36 2024 00:17:39.761 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:17:39.761 slat (nsec): min=6309, max=24036, avg=7498.00, stdev=838.29 00:17:39.761 clat (usec): min=71, max=175, avg=106.72, stdev=20.94 00:17:39.761 lat (usec): min=80, max=183, avg=114.22, stdev=20.88 00:17:39.761 clat percentiles (usec): 00:17:39.761 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 86], 00:17:39.761 | 30.00th=[ 89], 40.00th=[ 93], 50.00th=[ 105], 60.00th=[ 117], 00:17:39.761 | 70.00th=[ 122], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 141], 00:17:39.761 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 172], 00:17:39.761 | 99.99th=[ 176] 00:17:39.761 write: IOPS=4552, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1001msec); 0 zone resets 00:17:39.761 slat (nsec): min=8444, max=41539, avg=9597.94, stdev=1204.62 00:17:39.761 clat (usec): min=57, max=961, avg=103.23, stdev=25.01 00:17:39.761 lat (usec): min=80, max=970, avg=112.83, stdev=25.00 00:17:39.761 clat percentiles (usec): 00:17:39.761 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 84], 00:17:39.761 | 30.00th=[ 87], 40.00th=[ 90], 50.00th=[ 98], 60.00th=[ 111], 00:17:39.761 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 131], 95.00th=[ 137], 00:17:39.761 | 99.00th=[ 147], 99.50th=[ 151], 99.90th=[ 192], 99.95th=[ 326], 00:17:39.761 | 99.99th=[ 963] 00:17:39.761 bw ( KiB/s): min=17072, max=17072, per=23.24%, avg=17072.00, stdev= 0.00, samples=1 00:17:39.761 iops : min= 4268, max= 4268, avg=4268.00, stdev= 0.00, samples=1 00:17:39.761 lat (usec) : 100=49.22%, 250=50.75%, 500=0.01%, 750=0.01%, 1000=0.01% 00:17:39.761 cpu : usr=5.30%, sys=9.50%, ctx=8653, majf=0, minf=1 00:17:39.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.761 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.761 issued rwts: total=4096,4557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.761 job2: (groupid=0, jobs=1): err= 0: pid=287726: Wed Nov 20 05:15:36 2024 00:17:39.762 read: IOPS=3970, BW=15.5MiB/s (16.3MB/s)(15.5MiB/1001msec) 00:17:39.762 slat (nsec): min=6444, max=34691, avg=7606.20, stdev=868.71 00:17:39.762 clat (usec): min=79, max=204, avg=113.85, stdev=16.53 00:17:39.762 lat (usec): min=88, max=211, avg=121.46, stdev=16.52 00:17:39.762 clat percentiles (usec): 00:17:39.762 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 96], 00:17:39.762 | 30.00th=[ 101], 40.00th=[ 111], 50.00th=[ 117], 60.00th=[ 121], 00:17:39.762 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 135], 95.00th=[ 141], 00:17:39.762 | 99.00th=[ 151], 99.50th=[ 153], 99.90th=[ 161], 99.95th=[ 176], 00:17:39.762 | 99.99th=[ 204] 00:17:39.762 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:17:39.762 slat (nsec): min=8359, max=45065, avg=9670.05, stdev=926.53 00:17:39.762 clat (usec): min=78, max=197, avg=112.47, stdev=16.07 00:17:39.762 lat (usec): min=88, max=207, avg=122.14, stdev=16.11 00:17:39.762 clat percentiles (usec): 00:17:39.762 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 95], 00:17:39.762 | 30.00th=[ 102], 40.00th=[ 110], 50.00th=[ 115], 60.00th=[ 119], 00:17:39.762 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 139], 00:17:39.762 | 99.00th=[ 147], 99.50th=[ 151], 99.90th=[ 163], 99.95th=[ 176], 00:17:39.762 | 99.99th=[ 198] 00:17:39.762 bw ( KiB/s): min=16384, max=16384, per=22.31%, avg=16384.00, stdev= 0.00, samples=1 00:17:39.762 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:17:39.762 lat (usec) : 100=28.03%, 250=71.97% 00:17:39.762 cpu : usr=3.70%, sys=10.30%, ctx=8070, 
majf=0, minf=1 00:17:39.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.762 issued rwts: total=3974,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.762 job3: (groupid=0, jobs=1): err= 0: pid=287727: Wed Nov 20 05:15:36 2024 00:17:39.762 read: IOPS=4373, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1001msec) 00:17:39.762 slat (nsec): min=6532, max=23790, avg=7635.30, stdev=722.11 00:17:39.762 clat (usec): min=81, max=298, avg=101.81, stdev= 7.70 00:17:39.762 lat (usec): min=89, max=305, avg=109.44, stdev= 7.74 00:17:39.762 clat percentiles (usec): 00:17:39.762 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 96], 00:17:39.762 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 103], 00:17:39.762 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 116], 00:17:39.762 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 137], 00:17:39.762 | 99.99th=[ 297] 00:17:39.762 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:17:39.762 slat (nsec): min=8645, max=44354, avg=9704.15, stdev=952.40 00:17:39.762 clat (usec): min=80, max=288, avg=99.16, stdev= 8.68 00:17:39.762 lat (usec): min=90, max=298, avg=108.86, stdev= 8.75 00:17:39.762 clat percentiles (usec): 00:17:39.762 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 93], 00:17:39.762 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 98], 60.00th=[ 100], 00:17:39.762 | 70.00th=[ 102], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 113], 00:17:39.762 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 137], 99.95th=[ 253], 00:17:39.762 | 99.99th=[ 289] 00:17:39.762 bw ( KiB/s): min=19688, max=19688, per=26.80%, avg=19688.00, stdev= 0.00, samples=1 00:17:39.762 iops : min= 4922, max= 4922, avg=4922.00, stdev= 0.00, 
samples=1 00:17:39.762 lat (usec) : 100=52.15%, 250=47.81%, 500=0.04% 00:17:39.762 cpu : usr=5.60%, sys=10.00%, ctx=8986, majf=0, minf=1 00:17:39.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.762 issued rwts: total=4378,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.762 00:17:39.762 Run status group 0 (all jobs): 00:17:39.762 READ: bw=67.6MiB/s (70.8MB/s), 15.5MiB/s-19.0MiB/s (16.3MB/s-19.9MB/s), io=67.6MiB (70.9MB), run=1001-1001msec 00:17:39.762 WRITE: bw=71.7MiB/s (75.2MB/s), 16.0MiB/s-20.0MiB/s (16.8MB/s-20.9MB/s), io=71.8MiB (75.3MB), run=1001-1001msec 00:17:39.762 00:17:39.762 Disk stats (read/write): 00:17:39.762 nvme0n1: ios=4229/4608, merge=0/0, ticks=339/362, in_queue=701, util=85.97% 00:17:39.762 nvme0n2: ios=3573/3584, merge=0/0, ticks=368/355, in_queue=723, util=86.57% 00:17:39.762 nvme0n3: ios=3354/3584, merge=0/0, ticks=359/377, in_queue=736, util=88.92% 00:17:39.762 nvme0n4: ios=3584/4060, merge=0/0, ticks=350/353, in_queue=703, util=89.58% 00:17:39.762 05:15:36 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:39.762 [global] 00:17:39.762 thread=1 00:17:39.762 invalidate=1 00:17:39.762 rw=randwrite 00:17:39.762 time_based=1 00:17:39.762 runtime=1 00:17:39.762 ioengine=libaio 00:17:39.762 direct=1 00:17:39.762 bs=4096 00:17:39.762 iodepth=1 00:17:39.762 norandommap=0 00:17:39.762 numjobs=1 00:17:39.762 00:17:39.762 verify_dump=1 00:17:39.762 verify_backlog=512 00:17:39.762 verify_state_save=0 00:17:39.762 do_verify=1 00:17:39.762 verify=crc32c-intel 00:17:40.102 [job0] 00:17:40.102 filename=/dev/nvme0n1 00:17:40.102 [job1] 00:17:40.102 filename=/dev/nvme0n2 00:17:40.102 [job2] 
00:17:40.102 filename=/dev/nvme0n3 00:17:40.102 [job3] 00:17:40.102 filename=/dev/nvme0n4 00:17:40.102 Could not set queue depth (nvme0n1) 00:17:40.102 Could not set queue depth (nvme0n2) 00:17:40.102 Could not set queue depth (nvme0n3) 00:17:40.102 Could not set queue depth (nvme0n4) 00:17:40.102 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:40.102 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:40.102 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:40.102 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:40.102 fio-3.35 00:17:40.102 Starting 4 threads 00:17:41.539 00:17:41.539 job0: (groupid=0, jobs=1): err= 0: pid=288100: Wed Nov 20 05:15:38 2024 00:17:41.539 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:17:41.539 slat (nsec): min=6789, max=48630, avg=10018.48, stdev=1658.27 00:17:41.539 clat (usec): min=72, max=170, avg=89.55, stdev= 8.44 00:17:41.539 lat (usec): min=84, max=181, avg=99.57, stdev= 8.30 00:17:41.539 clat percentiles (usec): 00:17:41.539 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 84], 00:17:41.539 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 90], 00:17:41.539 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 98], 95.00th=[ 104], 00:17:41.539 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 137], 99.95th=[ 141], 00:17:41.539 | 99.99th=[ 172] 00:17:41.539 write: IOPS=5086, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1001msec); 0 zone resets 00:17:41.539 slat (nsec): min=8220, max=82675, avg=11969.41, stdev=2199.04 00:17:41.539 clat (usec): min=68, max=178, avg=88.92, stdev=10.76 00:17:41.539 lat (usec): min=83, max=191, avg=100.89, stdev=10.34 00:17:41.539 clat percentiles (usec): 00:17:41.539 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 82], 00:17:41.539 | 
30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 88], 00:17:41.539 | 70.00th=[ 90], 80.00th=[ 93], 90.00th=[ 101], 95.00th=[ 117], 00:17:41.539 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 145], 99.95th=[ 149], 00:17:41.539 | 99.99th=[ 180] 00:17:41.539 bw ( KiB/s): min=20480, max=20480, per=29.79%, avg=20480.00, stdev= 0.00, samples=1 00:17:41.539 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:17:41.539 lat (usec) : 100=91.04%, 250=8.96% 00:17:41.539 cpu : usr=6.90%, sys=15.20%, ctx=9703, majf=0, minf=1 00:17:41.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:41.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.539 issued rwts: total=4608,5092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:41.539 job1: (groupid=0, jobs=1): err= 0: pid=288102: Wed Nov 20 05:15:38 2024 00:17:41.539 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:17:41.539 slat (nsec): min=6465, max=39961, avg=7581.45, stdev=1044.08 00:17:41.539 clat (usec): min=78, max=545, avg=124.86, stdev=17.28 00:17:41.539 lat (usec): min=85, max=552, avg=132.44, stdev=17.36 00:17:41.539 clat percentiles (usec): 00:17:41.539 | 1.00th=[ 97], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 113], 00:17:41.539 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 00:17:41.539 | 70.00th=[ 130], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 155], 00:17:41.539 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 204], 99.95th=[ 359], 00:17:41.539 | 99.99th=[ 545] 00:17:41.539 write: IOPS=3916, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1001msec); 0 zone resets 00:17:41.539 slat (nsec): min=8424, max=43161, avg=9421.05, stdev=1535.65 00:17:41.539 clat (usec): min=81, max=437, avg=120.47, stdev=13.46 00:17:41.539 lat (usec): min=90, max=447, avg=129.89, stdev=13.66 00:17:41.539 clat 
percentiles (usec): 00:17:41.539 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:17:41.539 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 122], 00:17:41.539 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 137], 95.00th=[ 145], 00:17:41.539 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 186], 99.95th=[ 219], 00:17:41.539 | 99.99th=[ 437] 00:17:41.539 bw ( KiB/s): min=16384, max=16384, per=23.83%, avg=16384.00, stdev= 0.00, samples=1 00:17:41.539 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:17:41.539 lat (usec) : 100=1.83%, 250=98.13%, 500=0.03%, 750=0.01% 00:17:41.539 cpu : usr=4.00%, sys=9.10%, ctx=7510, majf=0, minf=1 00:17:41.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:41.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.539 issued rwts: total=3584,3920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:41.539 job2: (groupid=0, jobs=1): err= 0: pid=288107: Wed Nov 20 05:15:38 2024 00:17:41.539 read: IOPS=4034, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1001msec) 00:17:41.539 slat (nsec): min=6601, max=26810, avg=7708.09, stdev=890.73 00:17:41.539 clat (usec): min=77, max=203, avg=113.02, stdev=12.31 00:17:41.539 lat (usec): min=85, max=210, avg=120.73, stdev=12.40 00:17:41.539 clat percentiles (usec): 00:17:41.539 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 103], 00:17:41.539 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 118], 00:17:41.539 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 130], 00:17:41.539 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 149], 99.95th=[ 153], 00:17:41.539 | 99.99th=[ 204] 00:17:41.539 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:17:41.539 slat (nsec): min=8606, max=41717, avg=9526.03, stdev=1218.72 00:17:41.539 
clat (usec): min=75, max=415, avg=111.36, stdev=14.65 00:17:41.539 lat (usec): min=84, max=425, avg=120.89, stdev=14.76 00:17:41.539 clat percentiles (usec): 00:17:41.539 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 98], 00:17:41.539 | 30.00th=[ 106], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 118], 00:17:41.539 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 130], 00:17:41.539 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 172], 99.95th=[ 194], 00:17:41.539 | 99.99th=[ 416] 00:17:41.539 bw ( KiB/s): min=16976, max=16976, per=24.69%, avg=16976.00, stdev= 0.00, samples=1 00:17:41.539 iops : min= 4244, max= 4244, avg=4244.00, stdev= 0.00, samples=1 00:17:41.539 lat (usec) : 100=20.27%, 250=79.72%, 500=0.01% 00:17:41.539 cpu : usr=3.70%, sys=10.40%, ctx=8135, majf=0, minf=1 00:17:41.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:41.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.540 issued rwts: total=4039,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:41.540 job3: (groupid=0, jobs=1): err= 0: pid=288111: Wed Nov 20 05:15:38 2024 00:17:41.540 read: IOPS=3711, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1001msec) 00:17:41.540 slat (nsec): min=6436, max=26861, avg=7660.70, stdev=805.42 00:17:41.540 clat (usec): min=85, max=319, avg=120.01, stdev=17.13 00:17:41.540 lat (usec): min=92, max=326, avg=127.67, stdev=17.14 00:17:41.540 clat percentiles (usec): 00:17:41.540 | 1.00th=[ 94], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 106], 00:17:41.540 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 120], 00:17:41.540 | 70.00th=[ 126], 80.00th=[ 133], 90.00th=[ 147], 95.00th=[ 153], 00:17:41.540 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 208], 00:17:41.540 | 99.99th=[ 318] 00:17:41.540 write: IOPS=4091, BW=16.0MiB/s 
(16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:17:41.540 slat (nsec): min=8381, max=48178, avg=9517.95, stdev=1024.82 00:17:41.540 clat (usec): min=86, max=442, avg=114.82, stdev=16.32 00:17:41.540 lat (usec): min=96, max=451, avg=124.34, stdev=16.40 00:17:41.540 clat percentiles (usec): 00:17:41.540 | 1.00th=[ 93], 5.00th=[ 97], 10.00th=[ 100], 20.00th=[ 103], 00:17:41.540 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 116], 00:17:41.540 | 70.00th=[ 120], 80.00th=[ 125], 90.00th=[ 135], 95.00th=[ 143], 00:17:41.540 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 184], 99.95th=[ 424], 00:17:41.540 | 99.99th=[ 445] 00:17:41.540 bw ( KiB/s): min=16384, max=16384, per=23.83%, avg=16384.00, stdev= 0.00, samples=1 00:17:41.540 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:17:41.540 lat (usec) : 100=8.72%, 250=91.23%, 500=0.05% 00:17:41.540 cpu : usr=5.20%, sys=8.30%, ctx=7811, majf=0, minf=1 00:17:41.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:41.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.540 issued rwts: total=3715,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:41.540 00:17:41.540 Run status group 0 (all jobs): 00:17:41.540 READ: bw=62.2MiB/s (65.2MB/s), 14.0MiB/s-18.0MiB/s (14.7MB/s-18.9MB/s), io=62.3MiB (65.3MB), run=1001-1001msec 00:17:41.540 WRITE: bw=67.1MiB/s (70.4MB/s), 15.3MiB/s-19.9MiB/s (16.0MB/s-20.8MB/s), io=67.2MiB (70.5MB), run=1001-1001msec 00:17:41.540 00:17:41.540 Disk stats (read/write): 00:17:41.540 nvme0n1: ios=4146/4318, merge=0/0, ticks=346/351, in_queue=697, util=86.27% 00:17:41.540 nvme0n2: ios=3072/3366, merge=0/0, ticks=368/377, in_queue=745, util=86.69% 00:17:41.540 nvme0n3: ios=3314/3584, merge=0/0, ticks=365/369, in_queue=734, util=88.85% 00:17:41.540 nvme0n4: ios=3144/3584, 
merge=0/0, ticks=362/383, in_queue=745, util=89.70% 00:17:41.540 05:15:38 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:41.540 [global] 00:17:41.540 thread=1 00:17:41.540 invalidate=1 00:17:41.540 rw=write 00:17:41.540 time_based=1 00:17:41.540 runtime=1 00:17:41.540 ioengine=libaio 00:17:41.540 direct=1 00:17:41.540 bs=4096 00:17:41.540 iodepth=128 00:17:41.540 norandommap=0 00:17:41.540 numjobs=1 00:17:41.540 00:17:41.540 verify_dump=1 00:17:41.540 verify_backlog=512 00:17:41.540 verify_state_save=0 00:17:41.540 do_verify=1 00:17:41.540 verify=crc32c-intel 00:17:41.540 [job0] 00:17:41.540 filename=/dev/nvme0n1 00:17:41.540 [job1] 00:17:41.540 filename=/dev/nvme0n2 00:17:41.540 [job2] 00:17:41.540 filename=/dev/nvme0n3 00:17:41.540 [job3] 00:17:41.540 filename=/dev/nvme0n4 00:17:41.540 Could not set queue depth (nvme0n1) 00:17:41.540 Could not set queue depth (nvme0n2) 00:17:41.540 Could not set queue depth (nvme0n3) 00:17:41.540 Could not set queue depth (nvme0n4) 00:17:41.832 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:41.833 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:41.833 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:41.833 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:41.833 fio-3.35 00:17:41.833 Starting 4 threads 00:17:43.252 00:17:43.252 job0: (groupid=0, jobs=1): err= 0: pid=288483: Wed Nov 20 05:15:39 2024 00:17:43.252 read: IOPS=8686, BW=33.9MiB/s (35.6MB/s)(34.0MiB/1002msec) 00:17:43.252 slat (nsec): min=1523, max=1701.9k, avg=56575.74, stdev=196215.75 00:17:43.252 clat (usec): min=4733, max=14227, avg=7339.01, stdev=2087.69 00:17:43.252 lat (usec): min=4736, max=14871, avg=7395.59, 
stdev=2100.10 00:17:43.252 clat percentiles (usec): 00:17:43.252 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6325], 00:17:43.252 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6718], 00:17:43.252 | 70.00th=[ 6783], 80.00th=[ 7111], 90.00th=[12387], 95.00th=[13042], 00:17:43.252 | 99.00th=[13435], 99.50th=[13566], 99.90th=[14091], 99.95th=[14222], 00:17:43.252 | 99.99th=[14222] 00:17:43.252 write: IOPS=8792, BW=34.3MiB/s (36.0MB/s)(34.4MiB/1002msec); 0 zone resets 00:17:43.252 slat (usec): min=2, max=2662, avg=54.96, stdev=192.51 00:17:43.252 clat (usec): min=1225, max=13945, avg=7136.22, stdev=2088.61 00:17:43.252 lat (usec): min=1554, max=14490, avg=7191.18, stdev=2099.60 00:17:43.252 clat percentiles (usec): 00:17:43.252 | 1.00th=[ 4359], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 6128], 00:17:43.252 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6456], 00:17:43.252 | 70.00th=[ 6718], 80.00th=[ 7439], 90.00th=[11863], 95.00th=[12518], 00:17:43.252 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13829], 99.95th=[13960], 00:17:43.252 | 99.99th=[13960] 00:17:43.252 bw ( KiB/s): min=32328, max=37304, per=32.41%, avg=34816.00, stdev=3518.56, samples=2 00:17:43.252 iops : min= 8082, max= 9326, avg=8704.00, stdev=879.64, samples=2 00:17:43.252 lat (msec) : 2=0.15%, 4=0.29%, 10=87.52%, 20=12.04% 00:17:43.252 cpu : usr=3.40%, sys=5.39%, ctx=1420, majf=0, minf=1 00:17:43.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:43.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.252 issued rwts: total=8704,8810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.252 job1: (groupid=0, jobs=1): err= 0: pid=288484: Wed Nov 20 05:15:39 2024 00:17:43.252 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:17:43.252 
slat (nsec): min=1538, max=2724.1k, avg=111732.11, stdev=295552.74 00:17:43.252 clat (usec): min=7075, max=17169, avg=14320.87, stdev=1054.19 00:17:43.252 lat (usec): min=7077, max=17172, avg=14432.61, stdev=1054.55 00:17:43.252 clat percentiles (usec): 00:17:43.252 | 1.00th=[11600], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:17:43.252 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[14746], 00:17:43.252 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15270], 95.00th=[15533], 00:17:43.252 | 99.00th=[15926], 99.50th=[16319], 99.90th=[17171], 99.95th=[17171], 00:17:43.252 | 99.99th=[17171] 00:17:43.252 write: IOPS=4625, BW=18.1MiB/s (18.9MB/s)(18.1MiB/1003msec); 0 zone resets 00:17:43.252 slat (usec): min=2, max=3092, avg=102.17, stdev=279.06 00:17:43.252 clat (usec): min=2374, max=16734, avg=13138.26, stdev=2129.30 00:17:43.252 lat (usec): min=3340, max=16738, avg=13240.43, stdev=2142.80 00:17:43.252 clat percentiles (usec): 00:17:43.252 | 1.00th=[ 6259], 5.00th=[ 7439], 10.00th=[10945], 20.00th=[12125], 00:17:43.252 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14091], 60.00th=[14222], 00:17:43.252 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14877], 95.00th=[15139], 00:17:43.252 | 99.00th=[15664], 99.50th=[15795], 99.90th=[16319], 99.95th=[16450], 00:17:43.252 | 99.99th=[16712] 00:17:43.252 bw ( KiB/s): min=16384, max=20480, per=17.16%, avg=18432.00, stdev=2896.31, samples=2 00:17:43.252 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:17:43.252 lat (msec) : 4=0.05%, 10=4.60%, 20=95.35% 00:17:43.252 cpu : usr=2.40%, sys=2.30%, ctx=1230, majf=0, minf=1 00:17:43.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:43.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.252 issued rwts: total=4608,4639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.252 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:17:43.252 job2: (groupid=0, jobs=1): err= 0: pid=288485: Wed Nov 20 05:15:39 2024 00:17:43.252 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:17:43.252 slat (nsec): min=1476, max=2048.5k, avg=99051.50, stdev=286692.38 00:17:43.252 clat (usec): min=6830, max=16597, avg=12814.71, stdev=3035.29 00:17:43.252 lat (usec): min=6914, max=16638, avg=12913.76, stdev=3056.70 00:17:43.252 clat percentiles (usec): 00:17:43.252 | 1.00th=[ 7504], 5.00th=[ 8029], 10.00th=[ 8225], 20.00th=[ 8455], 00:17:43.252 | 30.00th=[ 9503], 40.00th=[14222], 50.00th=[14615], 60.00th=[14877], 00:17:43.252 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15270], 95.00th=[15533], 00:17:43.252 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16450], 99.95th=[16581], 00:17:43.252 | 99.99th=[16581] 00:17:43.252 write: IOPS=5263, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1003msec); 0 zone resets 00:17:43.252 slat (usec): min=2, max=3781, avg=90.89, stdev=273.34 00:17:43.252 clat (usec): min=2376, max=17172, avg=11628.13, stdev=3088.25 00:17:43.252 lat (usec): min=3321, max=17176, avg=11719.03, stdev=3107.70 00:17:43.252 clat percentiles (usec): 00:17:43.252 | 1.00th=[ 6718], 5.00th=[ 7635], 10.00th=[ 7767], 20.00th=[ 7963], 00:17:43.252 | 30.00th=[ 8160], 40.00th=[ 9896], 50.00th=[13566], 60.00th=[14091], 00:17:43.252 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14615], 95.00th=[14877], 00:17:43.252 | 99.00th=[15401], 99.50th=[15533], 99.90th=[16319], 99.95th=[17171], 00:17:43.252 | 99.99th=[17171] 00:17:43.252 bw ( KiB/s): min=16640, max=24576, per=19.19%, avg=20608.00, stdev=5611.60, samples=2 00:17:43.252 iops : min= 4160, max= 6144, avg=5152.00, stdev=1402.90, samples=2 00:17:43.252 lat (msec) : 4=0.13%, 10=35.00%, 20=64.87% 00:17:43.252 cpu : usr=2.00%, sys=3.39%, ctx=1120, majf=0, minf=2 00:17:43.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:43.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:17:43.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.252 issued rwts: total=5120,5279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.252 job3: (groupid=0, jobs=1): err= 0: pid=288486: Wed Nov 20 05:15:39 2024 00:17:43.252 read: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec) 00:17:43.252 slat (nsec): min=1532, max=1698.0k, avg=61272.30, stdev=221810.40 00:17:43.252 clat (usec): min=2244, max=9734, avg=7845.66, stdev=668.57 00:17:43.252 lat (usec): min=2246, max=9739, avg=7906.94, stdev=678.56 00:17:43.252 clat percentiles (usec): 00:17:43.252 | 1.00th=[ 5538], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7439], 00:17:43.252 | 30.00th=[ 7570], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 00:17:43.252 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8848], 00:17:43.252 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[ 9503], 99.95th=[ 9634], 00:17:43.252 | 99.99th=[ 9765] 00:17:43.252 write: IOPS=8189, BW=32.0MiB/s (33.5MB/s)(32.1MiB/1002msec); 0 zone resets 00:17:43.252 slat (usec): min=2, max=1700, avg=58.43, stdev=208.45 00:17:43.252 clat (usec): min=720, max=9828, avg=7618.30, stdev=576.07 00:17:43.252 lat (usec): min=1743, max=9834, avg=7676.74, stdev=586.42 00:17:43.252 clat percentiles (usec): 00:17:43.252 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7046], 20.00th=[ 7177], 00:17:43.252 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:17:43.252 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:17:43.252 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9372], 99.95th=[ 9634], 00:17:43.253 | 99.99th=[ 9765] 00:17:43.253 bw ( KiB/s): min=32768, max=32768, per=30.51%, avg=32768.00, stdev= 0.00, samples=2 00:17:43.253 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:17:43.253 lat (usec) : 750=0.01% 00:17:43.253 lat (msec) : 2=0.05%, 4=0.20%, 10=99.75% 00:17:43.253 cpu : 
usr=2.60%, sys=5.09%, ctx=1130, majf=0, minf=1 00:17:43.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:43.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.253 issued rwts: total=8192,8206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.253 00:17:43.253 Run status group 0 (all jobs): 00:17:43.253 READ: bw=104MiB/s (109MB/s), 17.9MiB/s-33.9MiB/s (18.8MB/s-35.6MB/s), io=104MiB (109MB), run=1002-1003msec 00:17:43.253 WRITE: bw=105MiB/s (110MB/s), 18.1MiB/s-34.3MiB/s (18.9MB/s-36.0MB/s), io=105MiB (110MB), run=1002-1003msec 00:17:43.253 00:17:43.253 Disk stats (read/write): 00:17:43.253 nvme0n1: ios=7218/7464, merge=0/0, ticks=16907/16949, in_queue=33856, util=86.57% 00:17:43.253 nvme0n2: ios=3831/4096, merge=0/0, ticks=18101/17371, in_queue=35472, util=86.70% 00:17:43.253 nvme0n3: ios=4402/4608, merge=0/0, ticks=18085/17298, in_queue=35383, util=88.87% 00:17:43.253 nvme0n4: ios=6805/7168, merge=0/0, ticks=13512/13437, in_queue=26949, util=89.61% 00:17:43.253 05:15:39 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:43.253 [global] 00:17:43.253 thread=1 00:17:43.253 invalidate=1 00:17:43.253 rw=randwrite 00:17:43.253 time_based=1 00:17:43.253 runtime=1 00:17:43.253 ioengine=libaio 00:17:43.253 direct=1 00:17:43.253 bs=4096 00:17:43.253 iodepth=128 00:17:43.253 norandommap=0 00:17:43.253 numjobs=1 00:17:43.253 00:17:43.253 verify_dump=1 00:17:43.253 verify_backlog=512 00:17:43.253 verify_state_save=0 00:17:43.253 do_verify=1 00:17:43.253 verify=crc32c-intel 00:17:43.253 [job0] 00:17:43.253 filename=/dev/nvme0n1 00:17:43.253 [job1] 00:17:43.253 filename=/dev/nvme0n2 00:17:43.253 [job2] 00:17:43.253 filename=/dev/nvme0n3 00:17:43.253 [job3] 00:17:43.253 
filename=/dev/nvme0n4 00:17:43.253 Could not set queue depth (nvme0n1) 00:17:43.253 Could not set queue depth (nvme0n2) 00:17:43.253 Could not set queue depth (nvme0n3) 00:17:43.253 Could not set queue depth (nvme0n4) 00:17:43.253 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:43.253 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:43.253 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:43.253 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:43.253 fio-3.35 00:17:43.253 Starting 4 threads 00:17:44.678 00:17:44.679 job0: (groupid=0, jobs=1): err= 0: pid=288868: Wed Nov 20 05:15:41 2024 00:17:44.679 read: IOPS=9535, BW=37.2MiB/s (39.1MB/s)(37.3MiB/1001msec) 00:17:44.679 slat (nsec): min=1458, max=3268.5k, avg=52200.07, stdev=191576.43 00:17:44.679 clat (usec): min=360, max=11194, avg=6740.93, stdev=576.82 00:17:44.679 lat (usec): min=1115, max=11200, avg=6793.13, stdev=567.11 00:17:44.679 clat percentiles (usec): 00:17:44.679 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6456], 00:17:44.679 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6783], 00:17:44.679 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:17:44.679 | 99.00th=[ 8291], 99.50th=[ 8979], 99.90th=[ 9896], 99.95th=[10290], 00:17:44.679 | 99.99th=[11207] 00:17:44.679 write: IOPS=9718, BW=38.0MiB/s (39.8MB/s)(38.0MiB/1001msec); 0 zone resets 00:17:44.679 slat (nsec): min=1954, max=2214.7k, avg=49083.23, stdev=173685.32 00:17:44.679 clat (usec): min=3834, max=10519, avg=6410.28, stdev=392.90 00:17:44.679 lat (usec): min=4349, max=10526, avg=6459.36, stdev=382.31 00:17:44.679 clat percentiles (usec): 00:17:44.679 | 1.00th=[ 5604], 5.00th=[ 5866], 10.00th=[ 5997], 20.00th=[ 6128], 00:17:44.679 
| 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6456], 00:17:44.679 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7046], 00:17:44.679 | 99.00th=[ 7373], 99.50th=[ 8291], 99.90th=[ 9372], 99.95th=[ 9765], 00:17:44.679 | 99.99th=[10552] 00:17:44.679 bw ( KiB/s): min=39616, max=39616, per=36.61%, avg=39616.00, stdev= 0.00, samples=1 00:17:44.679 iops : min= 9904, max= 9904, avg=9904.00, stdev= 0.00, samples=1 00:17:44.679 lat (usec) : 500=0.01% 00:17:44.679 lat (msec) : 2=0.13%, 4=0.17%, 10=99.65%, 20=0.04% 00:17:44.679 cpu : usr=3.80%, sys=5.70%, ctx=1326, majf=0, minf=1 00:17:44.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:44.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:44.679 issued rwts: total=9545,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:44.679 job1: (groupid=0, jobs=1): err= 0: pid=288869: Wed Nov 20 05:15:41 2024 00:17:44.679 read: IOPS=4404, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1003msec) 00:17:44.679 slat (nsec): min=1481, max=2058.1k, avg=112198.04, stdev=342476.79 00:17:44.679 clat (usec): min=1607, max=16191, avg=14274.69, stdev=1416.12 00:17:44.679 lat (usec): min=2521, max=16193, avg=14386.88, stdev=1378.59 00:17:44.679 clat percentiles (usec): 00:17:44.679 | 1.00th=[ 6783], 5.00th=[12387], 10.00th=[13698], 20.00th=[14091], 00:17:44.679 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14615], 00:17:44.679 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15008], 95.00th=[15139], 00:17:44.679 | 99.00th=[15533], 99.50th=[15533], 99.90th=[16188], 99.95th=[16188], 00:17:44.679 | 99.99th=[16188] 00:17:44.679 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:17:44.679 slat (usec): min=2, max=2010, avg=106.89, stdev=321.29 00:17:44.679 clat (usec): min=10513, 
max=14775, avg=13818.71, stdev=445.65 00:17:44.679 lat (usec): min=10551, max=14785, avg=13925.60, stdev=312.88 00:17:44.679 clat percentiles (usec): 00:17:44.679 | 1.00th=[12125], 5.00th=[13042], 10.00th=[13304], 20.00th=[13566], 00:17:44.679 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13960], 60.00th=[13960], 00:17:44.679 | 70.00th=[14091], 80.00th=[14091], 90.00th=[14222], 95.00th=[14353], 00:17:44.679 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:17:44.679 | 99.99th=[14746] 00:17:44.679 bw ( KiB/s): min=17648, max=19216, per=17.03%, avg=18432.00, stdev=1108.74, samples=2 00:17:44.679 iops : min= 4412, max= 4804, avg=4608.00, stdev=277.19, samples=2 00:17:44.679 lat (msec) : 2=0.01%, 4=0.22%, 10=0.79%, 20=98.98% 00:17:44.679 cpu : usr=2.00%, sys=2.99%, ctx=959, majf=0, minf=1 00:17:44.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:44.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:44.679 issued rwts: total=4418,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:44.679 job2: (groupid=0, jobs=1): err= 0: pid=288870: Wed Nov 20 05:15:41 2024 00:17:44.679 read: IOPS=4388, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1003msec) 00:17:44.679 slat (nsec): min=1399, max=2850.0k, avg=112423.78, stdev=399446.97 00:17:44.679 clat (usec): min=2407, max=17042, avg=14347.42, stdev=1261.31 00:17:44.679 lat (usec): min=4045, max=17046, avg=14459.85, stdev=1200.09 00:17:44.679 clat percentiles (usec): 00:17:44.679 | 1.00th=[ 7701], 5.00th=[12649], 10.00th=[13566], 20.00th=[14222], 00:17:44.679 | 30.00th=[14484], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:17:44.679 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15139], 95.00th=[15270], 00:17:44.679 | 99.00th=[15533], 99.50th=[15533], 99.90th=[16909], 99.95th=[16909], 00:17:44.679 | 
99.99th=[17171] 00:17:44.679 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:17:44.679 slat (nsec): min=1981, max=2020.0k, avg=106534.40, stdev=364798.19 00:17:44.679 clat (usec): min=10467, max=14960, avg=13801.55, stdev=490.22 00:17:44.679 lat (usec): min=10512, max=14964, avg=13908.08, stdev=331.93 00:17:44.679 clat percentiles (usec): 00:17:44.679 | 1.00th=[11994], 5.00th=[12649], 10.00th=[13304], 20.00th=[13566], 00:17:44.679 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13960], 60.00th=[13960], 00:17:44.679 | 70.00th=[14091], 80.00th=[14091], 90.00th=[14222], 95.00th=[14353], 00:17:44.679 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14746], 99.95th=[15008], 00:17:44.679 | 99.99th=[15008] 00:17:44.679 bw ( KiB/s): min=17688, max=19176, per=17.03%, avg=18432.00, stdev=1052.17, samples=2 00:17:44.679 iops : min= 4422, max= 4794, avg=4608.00, stdev=263.04, samples=2 00:17:44.679 lat (msec) : 4=0.01%, 10=0.78%, 20=99.21% 00:17:44.679 cpu : usr=2.10%, sys=3.39%, ctx=731, majf=0, minf=1 00:17:44.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:44.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:44.679 issued rwts: total=4402,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:44.679 job3: (groupid=0, jobs=1): err= 0: pid=288871: Wed Nov 20 05:15:41 2024 00:17:44.679 read: IOPS=8070, BW=31.5MiB/s (33.1MB/s)(31.6MiB/1003msec) 00:17:44.679 slat (nsec): min=1393, max=2159.6k, avg=61829.33, stdev=230297.81 00:17:44.679 clat (usec): min=2019, max=11107, avg=8007.30, stdev=536.43 00:17:44.679 lat (usec): min=2955, max=11460, avg=8069.13, stdev=514.78 00:17:44.679 clat percentiles (usec): 00:17:44.679 | 1.00th=[ 6718], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 7898], 00:17:44.679 | 30.00th=[ 7963], 40.00th=[ 8029], 50.00th=[ 
8029], 60.00th=[ 8094], 00:17:44.679 | 70.00th=[ 8160], 80.00th=[ 8225], 90.00th=[ 8291], 95.00th=[ 8356], 00:17:44.679 | 99.00th=[10159], 99.50th=[10290], 99.90th=[11076], 99.95th=[11076], 00:17:44.679 | 99.99th=[11076] 00:17:44.679 write: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec); 0 zone resets 00:17:44.679 slat (usec): min=2, max=2141, avg=58.76, stdev=216.99 00:17:44.679 clat (usec): min=6124, max=10589, avg=7597.56, stdev=362.65 00:17:44.679 lat (usec): min=6128, max=10601, avg=7656.32, stdev=340.86 00:17:44.679 clat percentiles (usec): 00:17:44.679 | 1.00th=[ 6456], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7373], 00:17:44.679 | 30.00th=[ 7439], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7701], 00:17:44.679 | 70.00th=[ 7767], 80.00th=[ 7832], 90.00th=[ 7963], 95.00th=[ 8094], 00:17:44.679 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 9503], 99.95th=[ 9503], 00:17:44.679 | 99.99th=[10552] 00:17:44.679 bw ( KiB/s): min=32768, max=32768, per=30.28%, avg=32768.00, stdev= 0.00, samples=2 00:17:44.679 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:17:44.679 lat (msec) : 4=0.20%, 10=99.25%, 20=0.55% 00:17:44.679 cpu : usr=2.00%, sys=5.19%, ctx=1020, majf=0, minf=1 00:17:44.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:44.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:44.679 issued rwts: total=8095,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:44.679 00:17:44.679 Run status group 0 (all jobs): 00:17:44.679 READ: bw=103MiB/s (108MB/s), 17.1MiB/s-37.2MiB/s (18.0MB/s-39.1MB/s), io=103MiB (108MB), run=1001-1003msec 00:17:44.679 WRITE: bw=106MiB/s (111MB/s), 17.9MiB/s-38.0MiB/s (18.8MB/s-39.8MB/s), io=106MiB (111MB), run=1001-1003msec 00:17:44.679 00:17:44.679 Disk stats (read/write): 00:17:44.679 nvme0n1: 
ios=8225/8192, merge=0/0, ticks=13950/12893, in_queue=26843, util=86.17% 00:17:44.679 nvme0n2: ios=3584/4077, merge=0/0, ticks=12897/13995, in_queue=26892, util=86.57% 00:17:44.679 nvme0n3: ios=3584/4060, merge=0/0, ticks=12772/13820, in_queue=26592, util=88.74% 00:17:44.679 nvme0n4: ios=6656/7144, merge=0/0, ticks=26197/26582, in_queue=52779, util=89.58% 00:17:44.679 05:15:41 -- target/fio.sh@55 -- # sync 00:17:44.679 05:15:41 -- target/fio.sh@59 -- # fio_pid=289107 00:17:44.679 05:15:41 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:44.679 05:15:41 -- target/fio.sh@61 -- # sleep 3 00:17:44.679 [global] 00:17:44.679 thread=1 00:17:44.679 invalidate=1 00:17:44.679 rw=read 00:17:44.679 time_based=1 00:17:44.679 runtime=10 00:17:44.679 ioengine=libaio 00:17:44.679 direct=1 00:17:44.679 bs=4096 00:17:44.679 iodepth=1 00:17:44.679 norandommap=1 00:17:44.679 numjobs=1 00:17:44.679 00:17:44.679 [job0] 00:17:44.679 filename=/dev/nvme0n1 00:17:44.679 [job1] 00:17:44.679 filename=/dev/nvme0n2 00:17:44.679 [job2] 00:17:44.679 filename=/dev/nvme0n3 00:17:44.679 [job3] 00:17:44.679 filename=/dev/nvme0n4 00:17:44.679 Could not set queue depth (nvme0n1) 00:17:44.679 Could not set queue depth (nvme0n2) 00:17:44.679 Could not set queue depth (nvme0n3) 00:17:44.679 Could not set queue depth (nvme0n4) 00:17:44.946 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:44.946 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:44.946 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:44.946 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:44.946 fio-3.35 00:17:44.946 Starting 4 threads 00:17:47.575 05:15:44 -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:47.843 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=79491072, buflen=4096 00:17:47.844 fio: pid=289251, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:47.844 05:15:44 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:47.844 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=86515712, buflen=4096 00:17:47.844 fio: pid=289250, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:47.844 05:15:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:47.844 05:15:44 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:48.108 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45719552, buflen=4096 00:17:48.108 fio: pid=289247, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:48.108 05:15:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:48.108 05:15:44 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:48.395 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54751232, buflen=4096 00:17:48.395 fio: pid=289248, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:48.395 05:15:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:48.395 05:15:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:48.395 00:17:48.395 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=289247: Wed Nov 
20 05:15:45 2024 00:17:48.395 read: IOPS=8795, BW=34.4MiB/s (36.0MB/s)(108MiB/3132msec) 00:17:48.395 slat (usec): min=6, max=11895, avg= 8.83, stdev=118.44 00:17:48.395 clat (usec): min=57, max=778, avg=102.95, stdev=17.25 00:17:48.395 lat (usec): min=66, max=12013, avg=111.78, stdev=119.78 00:17:48.395 clat percentiles (usec): 00:17:48.395 | 1.00th=[ 72], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 85], 00:17:48.395 | 30.00th=[ 89], 40.00th=[ 95], 50.00th=[ 109], 60.00th=[ 113], 00:17:48.395 | 70.00th=[ 117], 80.00th=[ 119], 90.00th=[ 123], 95.00th=[ 126], 00:17:48.395 | 99.00th=[ 133], 99.50th=[ 135], 99.90th=[ 145], 99.95th=[ 155], 00:17:48.395 | 99.99th=[ 208] 00:17:48.395 bw ( KiB/s): min=32168, max=41864, per=30.42%, avg=35286.67, stdev=4522.14, samples=6 00:17:48.395 iops : min= 8042, max=10466, avg=8821.67, stdev=1130.53, samples=6 00:17:48.395 lat (usec) : 100=43.29%, 250=56.70%, 750=0.01%, 1000=0.01% 00:17:48.395 cpu : usr=2.91%, sys=10.41%, ctx=27551, majf=0, minf=1 00:17:48.395 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:48.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.395 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.395 issued rwts: total=27547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:48.396 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=289248: Wed Nov 20 05:15:45 2024 00:17:48.396 read: IOPS=8820, BW=34.5MiB/s (36.1MB/s)(116MiB/3373msec) 00:17:48.396 slat (usec): min=3, max=15784, avg= 9.57, stdev=156.07 00:17:48.396 clat (usec): min=54, max=19858, avg=101.76, stdev=126.03 00:17:48.396 lat (usec): min=64, max=19887, avg=111.34, stdev=200.58 00:17:48.396 clat percentiles (usec): 00:17:48.396 | 1.00th=[ 64], 5.00th=[ 70], 10.00th=[ 78], 20.00th=[ 84], 00:17:48.396 | 30.00th=[ 88], 40.00th=[ 92], 50.00th=[ 103], 
60.00th=[ 112], 00:17:48.396 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 123], 95.00th=[ 126], 00:17:48.396 | 99.00th=[ 133], 99.50th=[ 135], 99.90th=[ 147], 99.95th=[ 182], 00:17:48.396 | 99.99th=[ 988] 00:17:48.396 bw ( KiB/s): min=32056, max=42008, per=30.00%, avg=34800.17, stdev=4044.39, samples=6 00:17:48.396 iops : min= 8014, max=10502, avg=8700.00, stdev=1011.07, samples=6 00:17:48.396 lat (usec) : 100=47.69%, 250=52.27%, 500=0.01%, 750=0.01%, 1000=0.01% 00:17:48.396 lat (msec) : 10=0.01%, 20=0.01% 00:17:48.396 cpu : usr=3.08%, sys=10.23%, ctx=29760, majf=0, minf=2 00:17:48.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:48.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.396 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.396 issued rwts: total=29752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:48.396 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=289250: Wed Nov 20 05:15:45 2024 00:17:48.396 read: IOPS=7359, BW=28.7MiB/s (30.1MB/s)(82.5MiB/2870msec) 00:17:48.396 slat (usec): min=6, max=7896, avg= 8.35, stdev=76.10 00:17:48.396 clat (usec): min=64, max=487, avg=125.97, stdev=13.60 00:17:48.396 lat (usec): min=71, max=7997, avg=134.33, stdev=77.08 00:17:48.396 clat percentiles (usec): 00:17:48.396 | 1.00th=[ 84], 5.00th=[ 92], 10.00th=[ 115], 20.00th=[ 122], 00:17:48.396 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 130], 00:17:48.396 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:17:48.396 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 176], 99.95th=[ 180], 00:17:48.396 | 99.99th=[ 235] 00:17:48.396 bw ( KiB/s): min=28424, max=29352, per=25.08%, avg=29094.40, stdev=381.68, samples=5 00:17:48.396 iops : min= 7106, max= 7338, avg=7273.60, stdev=95.42, samples=5 00:17:48.396 lat (usec) : 100=7.60%, 
250=92.39%, 500=0.01% 00:17:48.396 cpu : usr=2.86%, sys=8.54%, ctx=21126, majf=0, minf=2 00:17:48.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:48.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.396 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.396 issued rwts: total=21123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:48.396 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=289251: Wed Nov 20 05:15:45 2024 00:17:48.396 read: IOPS=7228, BW=28.2MiB/s (29.6MB/s)(75.8MiB/2685msec) 00:17:48.396 slat (nsec): min=6536, max=40835, avg=7733.00, stdev=909.53 00:17:48.396 clat (usec): min=88, max=630, avg=129.04, stdev= 8.93 00:17:48.396 lat (usec): min=96, max=637, avg=136.77, stdev= 8.96 00:17:48.396 clat percentiles (usec): 00:17:48.396 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 124], 00:17:48.396 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 129], 60.00th=[ 131], 00:17:48.396 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:17:48.396 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 176], 99.95th=[ 184], 00:17:48.396 | 99.99th=[ 412] 00:17:48.396 bw ( KiB/s): min=28416, max=29360, per=25.07%, avg=29089.60, stdev=385.02, samples=5 00:17:48.396 iops : min= 7104, max= 7340, avg=7272.40, stdev=96.25, samples=5 00:17:48.396 lat (usec) : 100=0.10%, 250=99.88%, 500=0.01%, 750=0.01% 00:17:48.396 cpu : usr=1.86%, sys=9.46%, ctx=19408, majf=0, minf=2 00:17:48.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:48.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.396 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.396 issued rwts: total=19408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.396 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:17:48.396 00:17:48.396 Run status group 0 (all jobs): 00:17:48.396 READ: bw=113MiB/s (119MB/s), 28.2MiB/s-34.5MiB/s (29.6MB/s-36.1MB/s), io=382MiB (401MB), run=2685-3373msec 00:17:48.396 00:17:48.396 Disk stats (read/write): 00:17:48.396 nvme0n1: ios=27432/0, merge=0/0, ticks=2694/0, in_queue=2694, util=94.45% 00:17:48.396 nvme0n2: ios=29751/0, merge=0/0, ticks=2862/0, in_queue=2862, util=94.62% 00:17:48.396 nvme0n3: ios=21010/0, merge=0/0, ticks=2534/0, in_queue=2534, util=96.04% 00:17:48.396 nvme0n4: ios=18861/0, merge=0/0, ticks=2292/0, in_queue=2292, util=96.44% 00:17:48.690 05:15:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:48.690 05:15:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:48.690 05:15:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:48.974 05:15:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:48.974 05:15:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:48.974 05:15:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:49.257 05:15:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:49.257 05:15:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:49.545 05:15:46 -- target/fio.sh@69 -- # fio_status=0 00:17:49.545 05:15:46 -- target/fio.sh@70 -- # wait 289107 00:17:49.545 05:15:46 -- target/fio.sh@70 -- # fio_status=4 00:17:49.545 05:15:46 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.159 
05:15:46 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:50.159 05:15:46 -- common/autotest_common.sh@1208 -- # local i=0 00:17:50.159 05:15:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:50.159 05:15:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:50.159 05:15:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:50.159 05:15:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:50.159 05:15:46 -- common/autotest_common.sh@1220 -- # return 0 00:17:50.159 05:15:46 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:50.159 05:15:46 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:50.159 nvmf hotplug test: fio failed as expected 00:17:50.159 05:15:46 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.430 05:15:47 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:50.430 05:15:47 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:50.430 05:15:47 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:50.430 05:15:47 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:50.430 05:15:47 -- target/fio.sh@91 -- # nvmftestfini 00:17:50.430 05:15:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:50.430 05:15:47 -- nvmf/common.sh@116 -- # sync 00:17:50.430 05:15:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:50.430 05:15:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:50.430 05:15:47 -- nvmf/common.sh@119 -- # set +e 00:17:50.430 05:15:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:50.430 05:15:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:50.430 rmmod nvme_rdma 00:17:50.430 rmmod nvme_fabrics 00:17:50.430 05:15:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:50.430 05:15:47 -- nvmf/common.sh@123 -- # set -e 00:17:50.430 05:15:47 -- 
nvmf/common.sh@124 -- # return 0 00:17:50.430 05:15:47 -- nvmf/common.sh@477 -- # '[' -n 286371 ']' 00:17:50.430 05:15:47 -- nvmf/common.sh@478 -- # killprocess 286371 00:17:50.430 05:15:47 -- common/autotest_common.sh@936 -- # '[' -z 286371 ']' 00:17:50.430 05:15:47 -- common/autotest_common.sh@940 -- # kill -0 286371 00:17:50.430 05:15:47 -- common/autotest_common.sh@941 -- # uname 00:17:50.430 05:15:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.430 05:15:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 286371 00:17:50.695 05:15:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.695 05:15:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.695 05:15:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 286371' 00:17:50.695 killing process with pid 286371 00:17:50.695 05:15:47 -- common/autotest_common.sh@955 -- # kill 286371 00:17:50.695 05:15:47 -- common/autotest_common.sh@960 -- # wait 286371 00:17:50.695 05:15:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:50.695 05:15:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:50.695 00:17:50.695 real 0m24.396s 00:17:50.695 user 1m48.959s 00:17:50.695 sys 0m8.284s 00:17:50.695 05:15:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:50.695 05:15:47 -- common/autotest_common.sh@10 -- # set +x 00:17:50.695 ************************************ 00:17:50.695 END TEST nvmf_fio_target 00:17:50.695 ************************************ 00:17:50.960 05:15:47 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:17:50.960 05:15:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:50.960 05:15:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.960 05:15:47 -- common/autotest_common.sh@10 -- # set +x 00:17:50.960 ************************************ 00:17:50.960 START TEST nvmf_bdevio 
00:17:50.960 ************************************ 00:17:50.960 05:15:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:17:50.960 * Looking for test storage... 00:17:50.960 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:50.960 05:15:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:50.960 05:15:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:50.960 05:15:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:50.960 05:15:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:50.960 05:15:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:50.960 05:15:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:50.960 05:15:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:50.960 05:15:47 -- scripts/common.sh@335 -- # IFS=.-: 00:17:50.960 05:15:47 -- scripts/common.sh@335 -- # read -ra ver1 00:17:50.960 05:15:47 -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.960 05:15:47 -- scripts/common.sh@336 -- # read -ra ver2 00:17:50.960 05:15:47 -- scripts/common.sh@337 -- # local 'op=<' 00:17:50.960 05:15:47 -- scripts/common.sh@339 -- # ver1_l=2 00:17:50.960 05:15:47 -- scripts/common.sh@340 -- # ver2_l=1 00:17:50.960 05:15:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:50.960 05:15:47 -- scripts/common.sh@343 -- # case "$op" in 00:17:50.960 05:15:47 -- scripts/common.sh@344 -- # : 1 00:17:50.960 05:15:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:50.960 05:15:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.960 05:15:47 -- scripts/common.sh@364 -- # decimal 1 00:17:50.960 05:15:47 -- scripts/common.sh@352 -- # local d=1 00:17:50.960 05:15:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.960 05:15:47 -- scripts/common.sh@354 -- # echo 1 00:17:50.960 05:15:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:50.960 05:15:47 -- scripts/common.sh@365 -- # decimal 2 00:17:50.960 05:15:47 -- scripts/common.sh@352 -- # local d=2 00:17:50.960 05:15:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.960 05:15:47 -- scripts/common.sh@354 -- # echo 2 00:17:50.960 05:15:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:50.960 05:15:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.960 05:15:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:50.960 05:15:47 -- scripts/common.sh@367 -- # return 0 00:17:50.960 05:15:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.960 05:15:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:50.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.960 --rc genhtml_branch_coverage=1 00:17:50.960 --rc genhtml_function_coverage=1 00:17:50.960 --rc genhtml_legend=1 00:17:50.960 --rc geninfo_all_blocks=1 00:17:50.960 --rc geninfo_unexecuted_blocks=1 00:17:50.960 00:17:50.960 ' 00:17:50.960 05:15:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:50.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.960 --rc genhtml_branch_coverage=1 00:17:50.960 --rc genhtml_function_coverage=1 00:17:50.960 --rc genhtml_legend=1 00:17:50.960 --rc geninfo_all_blocks=1 00:17:50.960 --rc geninfo_unexecuted_blocks=1 00:17:50.960 00:17:50.960 ' 00:17:50.960 05:15:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:50.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.960 --rc genhtml_branch_coverage=1 00:17:50.960 --rc 
genhtml_function_coverage=1 00:17:50.960 --rc genhtml_legend=1 00:17:50.960 --rc geninfo_all_blocks=1 00:17:50.960 --rc geninfo_unexecuted_blocks=1 00:17:50.960 00:17:50.960 ' 00:17:50.960 05:15:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:50.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.960 --rc genhtml_branch_coverage=1 00:17:50.960 --rc genhtml_function_coverage=1 00:17:50.960 --rc genhtml_legend=1 00:17:50.960 --rc geninfo_all_blocks=1 00:17:50.960 --rc geninfo_unexecuted_blocks=1 00:17:50.960 00:17:50.960 ' 00:17:50.960 05:15:47 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.960 05:15:47 -- nvmf/common.sh@7 -- # uname -s 00:17:50.960 05:15:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.960 05:15:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.960 05:15:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.960 05:15:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.960 05:15:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.960 05:15:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.960 05:15:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.960 05:15:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.960 05:15:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.960 05:15:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.960 05:15:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:50.960 05:15:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:50.960 05:15:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.960 05:15:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.960 05:15:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:50.960 05:15:47 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:50.960 05:15:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.960 05:15:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.960 05:15:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.960 05:15:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.960 05:15:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.960 05:15:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.960 05:15:47 -- paths/export.sh@5 -- # export PATH 00:17:50.960 05:15:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.960 05:15:47 -- nvmf/common.sh@46 -- # : 0 00:17:50.960 05:15:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.960 05:15:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.960 05:15:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.960 05:15:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.960 05:15:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.960 05:15:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:50.960 05:15:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.960 05:15:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.960 05:15:47 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.960 05:15:47 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.960 05:15:47 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:17:50.960 05:15:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:50.960 05:15:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.960 05:15:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.960 05:15:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.960 05:15:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.960 05:15:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.960 05:15:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.960 05:15:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.960 05:15:47 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:17:50.960 05:15:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:50.960 05:15:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:50.960 05:15:47 -- common/autotest_common.sh@10 -- # set +x 00:17:56.366 05:15:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:56.366 05:15:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:56.366 05:15:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:56.366 05:15:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:56.366 05:15:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:56.366 05:15:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:56.366 05:15:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:56.366 05:15:52 -- nvmf/common.sh@294 -- # net_devs=() 00:17:56.366 05:15:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:56.366 05:15:52 -- nvmf/common.sh@295 -- # e810=() 00:17:56.366 05:15:52 -- nvmf/common.sh@295 -- # local -ga e810 00:17:56.366 05:15:52 -- nvmf/common.sh@296 -- # x722=() 00:17:56.366 05:15:52 -- nvmf/common.sh@296 -- # local -ga x722 00:17:56.366 05:15:52 -- nvmf/common.sh@297 -- # mlx=() 00:17:56.366 05:15:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:56.366 05:15:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.366 05:15:52 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.366 05:15:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:56.366 05:15:52 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:56.366 05:15:52 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:56.366 05:15:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:56.366 05:15:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:56.366 05:15:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.366 05:15:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:56.366 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:56.366 05:15:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.366 05:15:52 
-- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:56.366 05:15:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.366 05:15:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:56.366 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:56.366 05:15:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:56.366 05:15:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:56.366 05:15:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:17:56.366 05:15:52 -- nvmf/common.sh@376 -- # modinfo irdma 00:17:56.366 05:15:52 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:17:56.366 05:15:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.366 05:15:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.366 05:15:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:56.366 05:15:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.366 05:15:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:56.366 Found net devices under 0000:af:00.0: cvl_0_0 00:17:56.366 05:15:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.366 05:15:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.366 05:15:52 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.366 05:15:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:56.366 05:15:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.366 05:15:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:56.366 Found net devices under 0000:af:00.1: cvl_0_1 00:17:56.366 05:15:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.366 05:15:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:56.366 05:15:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:56.366 05:15:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:56.366 05:15:52 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:56.366 05:15:52 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:56.366 05:15:52 -- nvmf/common.sh@57 -- # uname 00:17:56.366 05:15:52 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:56.366 05:15:52 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:56.366 05:15:52 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:56.366 05:15:52 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:56.366 05:15:52 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:56.366 05:15:52 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:56.366 05:15:52 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:56.366 05:15:52 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:56.366 05:15:52 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:56.366 05:15:52 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:56.366 05:15:52 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:56.366 05:15:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:56.366 05:15:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:56.366 05:15:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:56.366 05:15:52 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:56.367 05:15:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:56.367 05:15:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:56.367 05:15:52 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:56.367 05:15:52 -- nvmf/common.sh@104 -- # continue 2 00:17:56.367 05:15:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:56.367 05:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:56.367 05:15:52 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:56.367 05:15:52 -- nvmf/common.sh@104 -- # continue 2 00:17:56.367 05:15:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:56.367 05:15:52 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:17:56.367 05:15:52 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:56.367 05:15:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:56.367 05:15:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:56.367 05:15:52 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:17:56.367 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:56.367 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:56.367 altname enp175s0f0np0 00:17:56.367 altname ens801f0np0 00:17:56.367 inet 192.168.100.8/24 scope global cvl_0_0 00:17:56.367 valid_lft forever preferred_lft 
forever 00:17:56.367 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:56.367 valid_lft forever preferred_lft forever 00:17:56.367 05:15:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:56.367 05:15:52 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:17:56.367 05:15:52 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:56.367 05:15:52 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:56.367 05:15:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:56.367 05:15:52 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:17:56.367 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:56.367 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:56.367 altname enp175s0f1np1 00:17:56.367 altname ens801f1np1 00:17:56.367 inet 192.168.100.9/24 scope global cvl_0_1 00:17:56.367 valid_lft forever preferred_lft forever 00:17:56.367 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:56.367 valid_lft forever preferred_lft forever 00:17:56.367 05:15:52 -- nvmf/common.sh@410 -- # return 0 00:17:56.367 05:15:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:56.367 05:15:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:56.367 05:15:52 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:56.367 05:15:52 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:56.367 05:15:52 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:56.367 05:15:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:56.367 05:15:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:56.367 05:15:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:56.367 05:15:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:56.367 05:15:52 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:56.367 05:15:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:56.367 05:15:52 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:17:56.367 05:15:52 -- nvmf/common.sh@104 -- # continue 2 00:17:56.367 05:15:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:56.367 05:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.367 05:15:52 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:56.367 05:15:52 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:17:56.367 05:15:52 -- nvmf/common.sh@104 -- # continue 2 00:17:56.367 05:15:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:56.367 05:15:52 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:17:56.367 05:15:52 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:56.367 05:15:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:56.367 05:15:52 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:17:56.367 05:15:52 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:56.367 05:15:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:56.367 05:15:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:56.367 192.168.100.9' 00:17:56.367 05:15:52 -- nvmf/common.sh@445 -- # head -n 1 00:17:56.367 05:15:52 -- 
nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:56.367 192.168.100.9' 00:17:56.367 05:15:52 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:56.367 05:15:52 -- nvmf/common.sh@446 -- # tail -n +2 00:17:56.367 05:15:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:56.367 192.168.100.9' 00:17:56.367 05:15:52 -- nvmf/common.sh@446 -- # head -n 1 00:17:56.367 05:15:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:56.367 05:15:52 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:56.367 05:15:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:56.367 05:15:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:56.367 05:15:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:56.367 05:15:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:56.367 05:15:53 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:56.367 05:15:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:56.367 05:15:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:56.367 05:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:56.367 05:15:53 -- nvmf/common.sh@469 -- # nvmfpid=293325 00:17:56.367 05:15:53 -- nvmf/common.sh@470 -- # waitforlisten 293325 00:17:56.367 05:15:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:56.367 05:15:53 -- common/autotest_common.sh@829 -- # '[' -z 293325 ']' 00:17:56.367 05:15:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.367 05:15:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.367 05:15:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:56.367 05:15:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.367 05:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:56.367 [2024-11-20 05:15:53.062168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:56.367 [2024-11-20 05:15:53.062217] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.367 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.367 [2024-11-20 05:15:53.119715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.639 [2024-11-20 05:15:53.193167] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:56.639 [2024-11-20 05:15:53.193290] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.639 [2024-11-20 05:15:53.193298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.639 [2024-11-20 05:15:53.193304] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:56.639 [2024-11-20 05:15:53.193450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:56.639 [2024-11-20 05:15:53.193558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:56.639 [2024-11-20 05:15:53.193666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.639 [2024-11-20 05:15:53.193667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:57.213 05:15:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.213 05:15:53 -- common/autotest_common.sh@862 -- # return 0 00:17:57.213 05:15:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:57.213 05:15:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:57.213 05:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:57.213 05:15:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.213 05:15:53 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:57.213 05:15:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.213 05:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:57.213 [2024-11-20 05:15:53.936193] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1fed9e0/0x1fed020) succeed. 00:17:57.213 [2024-11-20 05:15:53.945124] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1feed50/0x1fed5a0) succeed. 00:17:57.213 [2024-11-20 05:15:53.945145] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:17:57.213 05:15:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.213 05:15:53 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:57.213 05:15:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.213 05:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:57.213 Malloc0 00:17:57.213 05:15:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.213 05:15:53 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:57.213 05:15:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.213 05:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:57.213 05:15:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.213 05:15:53 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.213 05:15:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.213 05:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:57.213 05:15:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.213 05:15:53 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:57.213 05:15:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.213 05:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:57.213 [2024-11-20 05:15:53.991976] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:57.213 05:15:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.213 05:15:53 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:57.213 05:15:53 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:57.213 05:15:53 -- nvmf/common.sh@520 -- # config=() 00:17:57.213 05:15:53 -- nvmf/common.sh@520 -- # local subsystem config 00:17:57.213 05:15:53 -- nvmf/common.sh@522 -- # 
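Steps @18 through @22 of target/bdevio.sh issue a fixed RPC sequence against the running nvmf_tgt: create the RDMA transport, back it with a malloc bdev, create a subsystem, attach the namespace, and add the listener. A dry-run sketch of that sequence — `rpc_dry_run` only prints the invocation; on a real system you would point it at spdk/scripts/rpc.py against the live /var/tmp/spdk.sock instead:

```shell
#!/usr/bin/env bash
# Dry-run of the bdevio.sh target-setup RPC sequence from the log.
# Printing instead of executing keeps this runnable without SPDK installed.
rpc_dry_run() { echo "rpc.py $*"; }

setup_cmds="$(
    rpc_dry_run nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_dry_run bdev_malloc_create 64 512 -b Malloc0
    rpc_dry_run nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_dry_run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_dry_run nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
)"
echo "$setup_cmds"
```

The arguments mirror the log exactly: a 64 MiB malloc bdev of 512-byte blocks, and the listener on the first RDMA IP at port 4420, which is what produces the "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice.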
for subsystem in "${@:-1}" 00:17:57.213 05:15:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:57.213 { 00:17:57.213 "params": { 00:17:57.213 "name": "Nvme$subsystem", 00:17:57.213 "trtype": "$TEST_TRANSPORT", 00:17:57.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:57.213 "adrfam": "ipv4", 00:17:57.213 "trsvcid": "$NVMF_PORT", 00:17:57.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:57.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:57.213 "hdgst": ${hdgst:-false}, 00:17:57.213 "ddgst": ${ddgst:-false} 00:17:57.213 }, 00:17:57.213 "method": "bdev_nvme_attach_controller" 00:17:57.213 } 00:17:57.213 EOF 00:17:57.213 )") 00:17:57.213 05:15:53 -- nvmf/common.sh@542 -- # cat 00:17:57.213 05:15:54 -- nvmf/common.sh@544 -- # jq . 00:17:57.213 05:15:54 -- nvmf/common.sh@545 -- # IFS=, 00:17:57.213 05:15:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:57.213 "params": { 00:17:57.213 "name": "Nvme1", 00:17:57.213 "trtype": "rdma", 00:17:57.213 "traddr": "192.168.100.8", 00:17:57.213 "adrfam": "ipv4", 00:17:57.213 "trsvcid": "4420", 00:17:57.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.213 "hdgst": false, 00:17:57.213 "ddgst": false 00:17:57.213 }, 00:17:57.213 "method": "bdev_nvme_attach_controller" 00:17:57.213 }' 00:17:57.213 [2024-11-20 05:15:54.037840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
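gen_nvmf_target_json (nvmf/common.sh@520-546 above) builds the `--json` config fed to bdevio over /dev/fd/62 via a heredoc, with `${hdgst:-false}`/`${ddgst:-false}` supplying defaults when the digest variables are unset. A reduced single-subsystem sketch of that expansion (the jq/IFS join step from the full helper is omitted):

```shell
#!/usr/bin/env bash
# Reduced sketch of gen_nvmf_target_json: emit one bdev_nvme_attach_controller
# entry. Values mirror the log; hdgst/ddgst are unset, so the :- expansions
# fall back to false, matching the printed config.
subsystem=1
TEST_TRANSPORT=rdma
NVMF_FIRST_TARGET_IP=192.168.100.8
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Because the heredoc delimiter is unquoted, every `$variable` expands at generation time, which is how the template with `$TEST_TRANSPORT` placeholders becomes the concrete `"trtype": "rdma"` block printed later in the trace.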
00:17:57.213 [2024-11-20 05:15:54.037884] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293536 ] 00:17:57.477 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.477 [2024-11-20 05:15:54.093148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.477 [2024-11-20 05:15:54.164258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.477 [2024-11-20 05:15:54.164354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.477 [2024-11-20 05:15:54.164356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.748 [2024-11-20 05:15:54.322004] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:17:57.748 [2024-11-20 05:15:54.322034] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:57.748 I/O targets: 00:17:57.748 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:57.748 00:17:57.748 00:17:57.748 CUnit - A unit testing framework for C - Version 2.1-3 00:17:57.748 http://cunit.sourceforge.net/ 00:17:57.748 00:17:57.748 00:17:57.748 Suite: bdevio tests on: Nvme1n1 00:17:57.748 Test: blockdev write read block ...passed 00:17:57.748 Test: blockdev write zeroes read block ...passed 00:17:57.748 Test: blockdev write zeroes read no split ...passed 00:17:57.748 Test: blockdev write zeroes read split ...passed 00:17:57.748 Test: blockdev write zeroes read split partial ...passed 00:17:57.748 Test: blockdev reset ...[2024-11-20 05:15:54.350434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:57.749 [2024-11-20 05:15:54.374406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:57.749 
[2024-11-20 05:15:54.402454] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:57.749 passed 00:17:57.749 Test: blockdev write read 8 blocks ...passed 00:17:57.749 Test: blockdev write read size > 128k ...passed 00:17:57.749 Test: blockdev write read invalid size ...passed 00:17:57.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:57.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:57.749 Test: blockdev write read max offset ...passed 00:17:57.749 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:57.749 Test: blockdev writev readv 8 blocks ...passed 00:17:57.749 Test: blockdev writev readv 30 x 1block ...passed 00:17:57.749 Test: blockdev writev readv block ...passed 00:17:57.749 Test: blockdev writev readv size > 128k ...passed 00:17:57.749 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:57.749 Test: blockdev comparev and writev ...[2024-11-20 05:15:54.405713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.749 [2024-11-20 05:15:54.405739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.405749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.749 [2024-11-20 05:15:54.405756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.405933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.749 [2024-11-20 05:15:54.405942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 
sqhd:0029 p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.405950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.749 [2024-11-20 05:15:54.405957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.406135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.749 [2024-11-20 05:15:54.406144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.406152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.749 [2024-11-20 05:15:54.406162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.406326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.749 [2024-11-20 05:15:54.406334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.406341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.749 [2024-11-20 05:15:54.406348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:57.749 passed 00:17:57.749 Test: blockdev nvme passthru rw ...passed 00:17:57.749 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:15:54.406641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL 
KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:57.749 [2024-11-20 05:15:54.406651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.406700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:57.749 [2024-11-20 05:15:54.406709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.406759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:57.749 [2024-11-20 05:15:54.406768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:57.749 [2024-11-20 05:15:54.406824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:57.749 [2024-11-20 05:15:54.406832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:57.749 passed 00:17:57.749 Test: blockdev nvme admin passthru ...passed 00:17:57.749 Test: blockdev copy ...passed 00:17:57.749 00:17:57.749 Run Summary: Type Total Ran Passed Failed Inactive 00:17:57.749 suites 1 1 n/a 0 0 00:17:57.749 tests 23 23 23 0 0 00:17:57.749 asserts 152 152 152 0 n/a 00:17:57.749 00:17:57.749 Elapsed time = 0.177 seconds 00:17:58.027 05:15:54 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.027 05:15:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.027 05:15:54 -- common/autotest_common.sh@10 -- # set +x 00:17:58.027 05:15:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.027 05:15:54 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 
00:17:58.027 05:15:54 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:58.027 05:15:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:58.027 05:15:54 -- nvmf/common.sh@116 -- # sync 00:17:58.027 05:15:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:58.027 05:15:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:58.027 05:15:54 -- nvmf/common.sh@119 -- # set +e 00:17:58.027 05:15:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:58.027 05:15:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:58.027 rmmod nvme_rdma 00:17:58.027 rmmod nvme_fabrics 00:17:58.027 05:15:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:58.027 05:15:54 -- nvmf/common.sh@123 -- # set -e 00:17:58.027 05:15:54 -- nvmf/common.sh@124 -- # return 0 00:17:58.027 05:15:54 -- nvmf/common.sh@477 -- # '[' -n 293325 ']' 00:17:58.027 05:15:54 -- nvmf/common.sh@478 -- # killprocess 293325 00:17:58.027 05:15:54 -- common/autotest_common.sh@936 -- # '[' -z 293325 ']' 00:17:58.027 05:15:54 -- common/autotest_common.sh@940 -- # kill -0 293325 00:17:58.027 05:15:54 -- common/autotest_common.sh@941 -- # uname 00:17:58.027 05:15:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.027 05:15:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 293325 00:17:58.027 05:15:54 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:58.027 05:15:54 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:58.027 05:15:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 293325' 00:17:58.027 killing process with pid 293325 00:17:58.027 05:15:54 -- common/autotest_common.sh@955 -- # kill 293325 00:17:58.027 05:15:54 -- common/autotest_common.sh@960 -- # wait 293325 00:17:58.306 05:15:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:58.306 05:15:54 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:58.306 00:17:58.306 real 0m7.413s 00:17:58.306 user 0m9.756s 00:17:58.306 sys 0m4.455s 00:17:58.306 
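The teardown above runs autotest_common.sh's killprocess against nvmfpid 293325: probe the pid with `kill -0`, read its name with `ps --no-headers -o comm=` to refuse killing a sudo wrapper, then SIGTERM and wait. A self-contained sketch against a throwaway `sleep` child (the sudo guard is simplified to a single name check):

```shell
#!/usr/bin/env bash
# Sketch of autotest_common.sh's killprocess: confirm the pid is alive,
# refuse to signal a sudo wrapper, then SIGTERM and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # process still exists?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1            # never kill through sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                     # reap; ignore the signal status
    return 0
}

sleep 30 &                  # stand-in for the nvmf_tgt pid in the log
target_pid=$!
killprocess "$target_pid" && result=killed || result=failed
echo "$result"
```

After the `wait`, the pid is fully reaped, which is why the real script can immediately continue with `modprobe -v -r nvme-rdma` without racing the dying target.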
05:15:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:58.306 05:15:54 -- common/autotest_common.sh@10 -- # set +x 00:17:58.306 ************************************ 00:17:58.306 END TEST nvmf_bdevio 00:17:58.306 ************************************ 00:17:58.306 05:15:55 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:17:58.306 05:15:55 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:58.306 05:15:55 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:17:58.306 05:15:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:58.306 05:15:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.306 05:15:55 -- common/autotest_common.sh@10 -- # set +x 00:17:58.306 ************************************ 00:17:58.306 START TEST nvmf_fuzz 00:17:58.306 ************************************ 00:17:58.306 05:15:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:17:58.306 * Looking for test storage... 
00:17:58.306 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:58.306 05:15:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:58.306 05:15:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:58.306 05:15:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:58.588 05:15:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:58.588 05:15:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:58.588 05:15:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:58.588 05:15:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:58.588 05:15:55 -- scripts/common.sh@335 -- # IFS=.-: 00:17:58.588 05:15:55 -- scripts/common.sh@335 -- # read -ra ver1 00:17:58.588 05:15:55 -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.588 05:15:55 -- scripts/common.sh@336 -- # read -ra ver2 00:17:58.588 05:15:55 -- scripts/common.sh@337 -- # local 'op=<' 00:17:58.588 05:15:55 -- scripts/common.sh@339 -- # ver1_l=2 00:17:58.588 05:15:55 -- scripts/common.sh@340 -- # ver2_l=1 00:17:58.588 05:15:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:58.588 05:15:55 -- scripts/common.sh@343 -- # case "$op" in 00:17:58.588 05:15:55 -- scripts/common.sh@344 -- # : 1 00:17:58.588 05:15:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:58.588 05:15:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.588 05:15:55 -- scripts/common.sh@364 -- # decimal 1 00:17:58.588 05:15:55 -- scripts/common.sh@352 -- # local d=1 00:17:58.588 05:15:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.588 05:15:55 -- scripts/common.sh@354 -- # echo 1 00:17:58.588 05:15:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:58.588 05:15:55 -- scripts/common.sh@365 -- # decimal 2 00:17:58.588 05:15:55 -- scripts/common.sh@352 -- # local d=2 00:17:58.588 05:15:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.588 05:15:55 -- scripts/common.sh@354 -- # echo 2 00:17:58.588 05:15:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:58.588 05:15:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:58.588 05:15:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:58.588 05:15:55 -- scripts/common.sh@367 -- # return 0 00:17:58.588 05:15:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.588 05:15:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:58.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.588 --rc genhtml_branch_coverage=1 00:17:58.588 --rc genhtml_function_coverage=1 00:17:58.588 --rc genhtml_legend=1 00:17:58.588 --rc geninfo_all_blocks=1 00:17:58.588 --rc geninfo_unexecuted_blocks=1 00:17:58.588 00:17:58.588 ' 00:17:58.588 05:15:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:58.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.588 --rc genhtml_branch_coverage=1 00:17:58.588 --rc genhtml_function_coverage=1 00:17:58.588 --rc genhtml_legend=1 00:17:58.588 --rc geninfo_all_blocks=1 00:17:58.588 --rc geninfo_unexecuted_blocks=1 00:17:58.588 00:17:58.588 ' 00:17:58.588 05:15:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:58.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.588 --rc genhtml_branch_coverage=1 00:17:58.588 --rc 
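The scripts/common.sh trace above (`lt 1.15 2` via cmp_versions) splits both version strings on `.-:` into arrays and compares them numerically component by component to decide which lcov options apply. A pure-bash sketch of that comparator:

```shell
#!/usr/bin/env bash
# Sketch of scripts/common.sh's cmp_versions "lt" path: split both versions
# on . - : and compare numerically, left to right; missing components are 0.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0     # strictly smaller at this component
        (( a > b )) && return 1
    done
    return 1                         # equal is not less-than
}

version_lt 1.15 2     && echo "1.15 < 2"        # the lcov check from the log
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
version_lt 2 1.15     || echo "2 !< 1.15"
```

Comparing component-wise is what makes `1.15 < 2` true here even though a plain string comparison would order them the other way.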
genhtml_function_coverage=1 00:17:58.588 --rc genhtml_legend=1 00:17:58.588 --rc geninfo_all_blocks=1 00:17:58.588 --rc geninfo_unexecuted_blocks=1 00:17:58.588 00:17:58.588 ' 00:17:58.588 05:15:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:58.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.588 --rc genhtml_branch_coverage=1 00:17:58.588 --rc genhtml_function_coverage=1 00:17:58.588 --rc genhtml_legend=1 00:17:58.588 --rc geninfo_all_blocks=1 00:17:58.588 --rc geninfo_unexecuted_blocks=1 00:17:58.588 00:17:58.588 ' 00:17:58.588 05:15:55 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.588 05:15:55 -- nvmf/common.sh@7 -- # uname -s 00:17:58.588 05:15:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.588 05:15:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.588 05:15:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.589 05:15:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.589 05:15:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.589 05:15:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.589 05:15:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.589 05:15:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.589 05:15:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.589 05:15:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.589 05:15:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:58.589 05:15:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:58.589 05:15:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.589 05:15:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.589 05:15:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:58.589 05:15:55 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:58.589 05:15:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.589 05:15:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.589 05:15:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.589 05:15:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.589 05:15:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.589 05:15:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.589 05:15:55 -- paths/export.sh@5 -- # export PATH 00:17:58.589 05:15:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.589 05:15:55 -- nvmf/common.sh@46 -- # : 0 00:17:58.589 05:15:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:58.589 05:15:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:58.589 05:15:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:58.589 05:15:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.589 05:15:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.589 05:15:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:58.589 05:15:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:58.589 05:15:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:58.589 05:15:55 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:58.589 05:15:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:58.589 05:15:55 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:17:58.589 05:15:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:58.589 05:15:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:58.589 05:15:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:58.589 05:15:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.589 05:15:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.589 05:15:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.589 05:15:55 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:17:58.589 05:15:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:58.589 05:15:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:58.589 05:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:03.984 05:16:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:03.984 05:16:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:03.984 05:16:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:03.984 05:16:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:03.984 05:16:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:03.984 05:16:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:03.985 05:16:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:03.985 05:16:00 -- nvmf/common.sh@294 -- # net_devs=() 00:18:03.985 05:16:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:03.985 05:16:00 -- nvmf/common.sh@295 -- # e810=() 00:18:03.985 05:16:00 -- nvmf/common.sh@295 -- # local -ga e810 00:18:03.985 05:16:00 -- nvmf/common.sh@296 -- # x722=() 00:18:03.985 05:16:00 -- nvmf/common.sh@296 -- # local -ga x722 00:18:03.985 05:16:00 -- nvmf/common.sh@297 -- # mlx=() 00:18:03.985 05:16:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:03.985 05:16:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.985 05:16:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:03.985 05:16:00 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:03.985 05:16:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:03.985 05:16:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:03.985 05:16:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:03.985 05:16:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:03.985 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:03.985 05:16:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme 
connect -i 15' 00:18:03.985 05:16:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:03.985 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:03.985 05:16:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:03.985 05:16:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:03.985 05:16:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:18:03.985 05:16:00 -- nvmf/common.sh@376 -- # modinfo irdma 00:18:03.985 05:16:00 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:18:03.985 05:16:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.985 05:16:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:03.985 05:16:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.985 05:16:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:03.985 Found net devices under 0000:af:00.0: cvl_0_0 00:18:03.985 05:16:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.985 05:16:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.985 05:16:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:03.985 
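The "Found net devices under 0000:af:00.0: cvl_0_0" lines come from nvmf/common.sh globbing each PCI function's `net/` directory and stripping the path prefix with `${pci_net_devs[@]##*/}`. A sketch against a mocked sysfs tree (the temp-dir layout is an assumption so this runs without the E810 hardware):

```shell
#!/usr/bin/env bash
# Sketch of nvmf/common.sh's per-PCI netdev discovery: glob the device's
# net/ directory and keep only the interface names. A temp dir stands in
# for /sys/bus/pci/devices.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)          # one entry per interface dir
    (( ${#pci_net_devs[@]} == 0 )) && continue  # function with no netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, keep iface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
echo "${net_devs[*]}"                           # cvl_0_0 cvl_0_1
```

The resulting net_devs array is what get_rdma_if_list then filters against the rxe device list in the `cvl_0_0 == \c\v\l\_\0\_\0` pattern matches seen earlier in the trace.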
05:16:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.985 05:16:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:03.985 Found net devices under 0000:af:00.1: cvl_0_1 00:18:03.985 05:16:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.985 05:16:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:03.985 05:16:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:03.985 05:16:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:03.985 05:16:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:03.985 05:16:00 -- nvmf/common.sh@57 -- # uname 00:18:03.985 05:16:00 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:03.985 05:16:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:03.985 05:16:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:03.985 05:16:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:03.985 05:16:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:03.985 05:16:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:03.985 05:16:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:03.985 05:16:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:03.985 05:16:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:03.985 05:16:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:03.985 05:16:00 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:03.985 05:16:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:03.985 05:16:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:03.985 05:16:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:03.985 05:16:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:03.985 05:16:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:03.985 
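The `get_ip_address` calls traced in the following lines reduce to an `ip -o -4 addr show <if> | awk | cut` pipeline: `-o` puts each address on one line, field 4 is `ADDR/PREFIX`, and `cut -d/ -f1` drops the prefix length. A sketch against a sample line reconstructed from the log output (an assumption; the real function reads the live interface):

```shell
#!/usr/bin/env bash
# Extract the bare IPv4 address the way nvmf/common.sh's get_ip_address does.
# The sample line mimics `ip -o -4 addr show cvl_0_0` one-line output.
sample='4: cvl_0_0    inet 192.168.100.8/24 scope global cvl_0_0'
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"   # 192.168.100.8
```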
05:16:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:18:03.985 05:16:00 -- nvmf/common.sh@104 -- # continue 2 00:18:03.985 05:16:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:18:03.985 05:16:00 -- nvmf/common.sh@104 -- # continue 2 00:18:03.985 05:16:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:03.985 05:16:00 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:18:03.985 05:16:00 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:03.985 05:16:00 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:03.985 05:16:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:18:03.985 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:03.985 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:18:03.985 altname enp175s0f0np0 00:18:03.985 altname ens801f0np0 00:18:03.985 inet 192.168.100.8/24 scope global cvl_0_0 00:18:03.985 valid_lft forever preferred_lft forever 00:18:03.985 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:18:03.985 valid_lft forever preferred_lft forever 00:18:03.985 05:16:00 
-- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:03.985 05:16:00 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:18:03.985 05:16:00 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:03.985 05:16:00 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:03.985 05:16:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:18:03.985 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:03.985 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:18:03.985 altname enp175s0f1np1 00:18:03.985 altname ens801f1np1 00:18:03.985 inet 192.168.100.9/24 scope global cvl_0_1 00:18:03.985 valid_lft forever preferred_lft forever 00:18:03.985 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:18:03.985 valid_lft forever preferred_lft forever 00:18:03.985 05:16:00 -- nvmf/common.sh@410 -- # return 0 00:18:03.985 05:16:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:03.985 05:16:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:03.985 05:16:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:03.985 05:16:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:03.985 05:16:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:03.985 05:16:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:03.985 05:16:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:03.985 05:16:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:03.985 05:16:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:03.985 05:16:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:03.985 05:16:00 -- 
nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:18:03.985 05:16:00 -- nvmf/common.sh@104 -- # continue 2 00:18:03.985 05:16:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:03.985 05:16:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:03.985 05:16:00 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:18:03.985 05:16:00 -- nvmf/common.sh@104 -- # continue 2 00:18:03.985 05:16:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:03.985 05:16:00 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:18:03.985 05:16:00 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:03.985 05:16:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:03.985 05:16:00 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:18:03.985 05:16:00 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:03.985 05:16:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:03.985 05:16:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:03.985 192.168.100.9' 00:18:03.985 05:16:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:03.985 192.168.100.9' 00:18:03.985 05:16:00 -- nvmf/common.sh@445 -- # head -n 1 00:18:03.985 05:16:00 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:03.985 05:16:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:03.985 192.168.100.9' 00:18:03.985 05:16:00 -- nvmf/common.sh@446 -- # tail -n +2 00:18:03.985 05:16:00 -- nvmf/common.sh@446 -- # head -n 1 00:18:03.985 05:16:00 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:03.985 05:16:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:03.985 05:16:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:03.985 05:16:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:03.985 05:16:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:03.985 05:16:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:03.985 05:16:00 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=296829 00:18:03.985 05:16:00 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:03.985 05:16:00 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:03.985 05:16:00 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 296829 00:18:03.985 05:16:00 -- common/autotest_common.sh@829 -- # '[' -z 296829 ']' 00:18:03.985 05:16:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.985 05:16:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.985 05:16:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:03.985 05:16:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.985 05:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:04.599 05:16:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.599 05:16:01 -- common/autotest_common.sh@862 -- # return 0 00:18:04.599 05:16:01 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:04.599 05:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.599 05:16:01 -- common/autotest_common.sh@10 -- # set +x 00:18:04.887 05:16:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.887 05:16:01 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:04.887 05:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.888 05:16:01 -- common/autotest_common.sh@10 -- # set +x 00:18:04.888 Malloc0 00:18:04.888 05:16:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.888 05:16:01 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:04.888 05:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.888 05:16:01 -- common/autotest_common.sh@10 -- # set +x 00:18:04.888 05:16:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.888 05:16:01 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:04.888 05:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.888 05:16:01 -- common/autotest_common.sh@10 -- # set +x 00:18:04.888 05:16:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.888 05:16:01 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:04.888 05:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.888 05:16:01 -- common/autotest_common.sh@10 -- # set +x 00:18:04.888 05:16:01 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:18:04.888 05:16:01 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:18:04.888 05:16:01 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:18:37.292 Fuzzing completed. Shutting down the fuzz application 00:18:37.292 00:18:37.292 Dumping successful admin opcodes: 00:18:37.292 8, 9, 10, 24, 00:18:37.292 Dumping successful io opcodes: 00:18:37.292 0, 9, 00:18:37.292 NS: 0x200003af1f00 I/O qp, Total commands completed: 1321566, total successful commands: 7785, random_seed: 259845568 00:18:37.292 NS: 0x200003af1f00 admin qp, Total commands completed: 166592, total successful commands: 1353, random_seed: 1470345024 00:18:37.292 05:16:31 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:37.292 Fuzzing completed. 
Shutting down the fuzz application 00:18:37.292 00:18:37.292 Dumping successful admin opcodes: 00:18:37.292 24, 00:18:37.292 Dumping successful io opcodes: 00:18:37.292 00:18:37.292 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 412969462 00:18:37.292 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 413039696 00:18:37.292 05:16:33 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.292 05:16:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.292 05:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:37.292 05:16:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.292 05:16:33 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:37.292 05:16:33 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:37.292 05:16:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:37.292 05:16:33 -- nvmf/common.sh@116 -- # sync 00:18:37.292 05:16:33 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:37.292 05:16:33 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:37.292 05:16:33 -- nvmf/common.sh@119 -- # set +e 00:18:37.292 05:16:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:37.292 05:16:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:37.292 rmmod nvme_rdma 00:18:37.292 rmmod nvme_fabrics 00:18:37.292 05:16:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:37.292 05:16:33 -- nvmf/common.sh@123 -- # set -e 00:18:37.292 05:16:33 -- nvmf/common.sh@124 -- # return 0 00:18:37.292 05:16:33 -- nvmf/common.sh@477 -- # '[' -n 296829 ']' 00:18:37.292 05:16:33 -- nvmf/common.sh@478 -- # killprocess 296829 00:18:37.292 05:16:33 -- common/autotest_common.sh@936 -- # '[' -z 296829 ']' 00:18:37.292 05:16:33 -- common/autotest_common.sh@940 -- # kill -0 296829 00:18:37.292 05:16:33 -- common/autotest_common.sh@941 -- # uname 00:18:37.292 05:16:33 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:37.292 05:16:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 296829 00:18:37.292 05:16:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:37.292 05:16:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:37.292 05:16:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 296829' 00:18:37.292 killing process with pid 296829 00:18:37.292 05:16:33 -- common/autotest_common.sh@955 -- # kill 296829 00:18:37.292 05:16:33 -- common/autotest_common.sh@960 -- # wait 296829 00:18:37.292 05:16:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:37.292 05:16:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:37.292 05:16:33 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:18:37.292 00:18:37.292 real 0m38.604s 00:18:37.292 user 0m53.510s 00:18:37.292 sys 0m16.490s 00:18:37.292 05:16:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:37.292 05:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:37.292 ************************************ 00:18:37.292 END TEST nvmf_fuzz 00:18:37.292 ************************************ 00:18:37.292 05:16:33 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:18:37.292 05:16:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:37.292 05:16:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:37.292 05:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:37.292 ************************************ 00:18:37.292 START TEST nvmf_multiconnection 00:18:37.292 ************************************ 00:18:37.292 05:16:33 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:18:37.292 * Looking for test storage... 00:18:37.292 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:18:37.292 05:16:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:37.292 05:16:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:37.292 05:16:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:37.292 05:16:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:37.292 05:16:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:37.292 05:16:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:37.292 05:16:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:37.292 05:16:33 -- scripts/common.sh@335 -- # IFS=.-: 00:18:37.292 05:16:33 -- scripts/common.sh@335 -- # read -ra ver1 00:18:37.292 05:16:33 -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.292 05:16:33 -- scripts/common.sh@336 -- # read -ra ver2 00:18:37.292 05:16:33 -- scripts/common.sh@337 -- # local 'op=<' 00:18:37.292 05:16:33 -- scripts/common.sh@339 -- # ver1_l=2 00:18:37.292 05:16:33 -- scripts/common.sh@340 -- # ver2_l=1 00:18:37.292 05:16:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:37.292 05:16:33 -- scripts/common.sh@343 -- # case "$op" in 00:18:37.293 05:16:33 -- scripts/common.sh@344 -- # : 1 00:18:37.293 05:16:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:37.293 05:16:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.293 05:16:33 -- scripts/common.sh@364 -- # decimal 1 00:18:37.293 05:16:33 -- scripts/common.sh@352 -- # local d=1 00:18:37.293 05:16:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.293 05:16:33 -- scripts/common.sh@354 -- # echo 1 00:18:37.293 05:16:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:37.293 05:16:33 -- scripts/common.sh@365 -- # decimal 2 00:18:37.293 05:16:33 -- scripts/common.sh@352 -- # local d=2 00:18:37.293 05:16:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.293 05:16:33 -- scripts/common.sh@354 -- # echo 2 00:18:37.293 05:16:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:37.293 05:16:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:37.293 05:16:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:37.293 05:16:33 -- scripts/common.sh@367 -- # return 0 00:18:37.293 05:16:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.293 05:16:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:37.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.293 --rc genhtml_branch_coverage=1 00:18:37.293 --rc genhtml_function_coverage=1 00:18:37.293 --rc genhtml_legend=1 00:18:37.293 --rc geninfo_all_blocks=1 00:18:37.293 --rc geninfo_unexecuted_blocks=1 00:18:37.293 00:18:37.293 ' 00:18:37.293 05:16:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:37.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.293 --rc genhtml_branch_coverage=1 00:18:37.293 --rc genhtml_function_coverage=1 00:18:37.293 --rc genhtml_legend=1 00:18:37.293 --rc geninfo_all_blocks=1 00:18:37.293 --rc geninfo_unexecuted_blocks=1 00:18:37.293 00:18:37.293 ' 00:18:37.293 05:16:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:37.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.293 --rc genhtml_branch_coverage=1 00:18:37.293 --rc 
genhtml_function_coverage=1 00:18:37.293 --rc genhtml_legend=1 00:18:37.293 --rc geninfo_all_blocks=1 00:18:37.293 --rc geninfo_unexecuted_blocks=1 00:18:37.293 00:18:37.293 ' 00:18:37.293 05:16:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:37.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.293 --rc genhtml_branch_coverage=1 00:18:37.293 --rc genhtml_function_coverage=1 00:18:37.293 --rc genhtml_legend=1 00:18:37.293 --rc geninfo_all_blocks=1 00:18:37.293 --rc geninfo_unexecuted_blocks=1 00:18:37.293 00:18:37.293 ' 00:18:37.293 05:16:33 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.293 05:16:33 -- nvmf/common.sh@7 -- # uname -s 00:18:37.293 05:16:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.293 05:16:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.293 05:16:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.293 05:16:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.293 05:16:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.293 05:16:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.293 05:16:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.293 05:16:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.293 05:16:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.293 05:16:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.293 05:16:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:37.293 05:16:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:37.293 05:16:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.293 05:16:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.293 05:16:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:37.293 05:16:33 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:18:37.293 05:16:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.293 05:16:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.293 05:16:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.293 05:16:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.293 05:16:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.293 05:16:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.293 05:16:33 -- paths/export.sh@5 -- # export PATH 00:18:37.293 05:16:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.293 05:16:33 -- nvmf/common.sh@46 -- # : 0 00:18:37.293 05:16:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:37.293 05:16:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:37.293 05:16:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:37.293 05:16:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.293 05:16:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.293 05:16:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:37.293 05:16:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:37.293 05:16:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:37.293 05:16:33 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.293 05:16:33 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.293 05:16:33 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:37.293 05:16:33 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:37.293 05:16:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:37.293 05:16:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.293 05:16:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:37.293 05:16:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:37.293 05:16:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:37.293 05:16:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.293 05:16:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.293 05:16:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.293 05:16:33 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:37.293 05:16:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:37.293 05:16:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:37.293 05:16:33 -- common/autotest_common.sh@10 -- # set +x 00:18:42.568 05:16:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:42.568 05:16:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:42.568 05:16:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:42.568 05:16:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:42.568 05:16:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:42.568 05:16:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:42.568 05:16:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:42.568 05:16:38 -- nvmf/common.sh@294 -- # net_devs=() 00:18:42.568 05:16:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:42.568 05:16:38 -- nvmf/common.sh@295 -- # e810=() 00:18:42.568 05:16:38 -- nvmf/common.sh@295 -- # local -ga e810 00:18:42.568 05:16:38 -- nvmf/common.sh@296 -- # x722=() 00:18:42.568 05:16:38 -- nvmf/common.sh@296 -- # local -ga x722 00:18:42.568 05:16:38 -- nvmf/common.sh@297 -- # mlx=() 00:18:42.568 05:16:38 -- nvmf/common.sh@297 -- # local -ga mlx 
00:18:42.568 05:16:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.568 05:16:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:42.568 05:16:38 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:42.568 05:16:38 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:42.568 05:16:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:42.568 05:16:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:42.568 05:16:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:42.568 05:16:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:42.568 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:42.568 05:16:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@349 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:42.568 05:16:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:42.568 05:16:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:42.568 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:42.568 05:16:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:42.568 05:16:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:42.568 05:16:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:18:42.568 05:16:38 -- nvmf/common.sh@376 -- # modinfo irdma 00:18:42.568 05:16:38 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:18:42.568 05:16:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:42.568 05:16:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.568 05:16:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:42.568 05:16:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.568 05:16:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:42.568 Found net devices under 0000:af:00.0: cvl_0_0 00:18:42.568 05:16:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.568 05:16:38 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:42.568 05:16:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.568 05:16:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:42.568 05:16:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.568 05:16:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:42.568 Found net devices under 0000:af:00.1: cvl_0_1 00:18:42.568 05:16:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.568 05:16:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:42.568 05:16:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:42.568 05:16:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:42.568 05:16:38 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:42.568 05:16:38 -- nvmf/common.sh@57 -- # uname 00:18:42.568 05:16:38 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:42.568 05:16:38 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:42.568 05:16:38 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:42.568 05:16:38 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:42.568 05:16:38 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:42.568 05:16:38 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:42.568 05:16:38 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:42.568 05:16:38 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:42.568 05:16:38 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:42.568 05:16:38 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:42.568 05:16:38 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:42.568 05:16:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:42.568 05:16:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:42.568 05:16:38 -- nvmf/common.sh@93 -- 
# rxe_cfg rxe-net 00:18:42.568 05:16:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:42.568 05:16:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:42.568 05:16:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:42.568 05:16:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.568 05:16:38 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:18:42.568 05:16:38 -- nvmf/common.sh@104 -- # continue 2 00:18:42.568 05:16:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:42.568 05:16:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.568 05:16:38 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.568 05:16:38 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:42.568 05:16:38 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:18:42.568 05:16:38 -- nvmf/common.sh@104 -- # continue 2 00:18:42.568 05:16:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:42.568 05:16:39 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:18:42.568 05:16:39 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:18:42.568 05:16:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:18:42.568 05:16:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:42.568 05:16:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:42.568 05:16:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:42.568 05:16:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:42.568 05:16:39 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:18:42.568 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:42.568 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:18:42.568 altname enp175s0f0np0 00:18:42.568 altname ens801f0np0 00:18:42.568 inet 192.168.100.8/24 
scope global cvl_0_0 00:18:42.568 valid_lft forever preferred_lft forever 00:18:42.568 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:18:42.568 valid_lft forever preferred_lft forever 00:18:42.568 05:16:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:42.568 05:16:39 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:18:42.568 05:16:39 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:18:42.568 05:16:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:42.568 05:16:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:42.568 05:16:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:18:42.568 05:16:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:42.568 05:16:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:42.568 05:16:39 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:18:42.568 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:42.568 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:18:42.568 altname enp175s0f1np1 00:18:42.568 altname ens801f1np1 00:18:42.568 inet 192.168.100.9/24 scope global cvl_0_1 00:18:42.568 valid_lft forever preferred_lft forever 00:18:42.568 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:18:42.568 valid_lft forever preferred_lft forever 00:18:42.569 05:16:39 -- nvmf/common.sh@410 -- # return 0 00:18:42.569 05:16:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:42.569 05:16:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:42.569 05:16:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:42.569 05:16:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:42.569 05:16:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:42.569 05:16:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:42.569 05:16:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:42.569 05:16:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:42.569 05:16:39 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:42.569 05:16:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:42.569 05:16:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:42.569 05:16:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.569 05:16:39 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:42.569 05:16:39 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:18:42.569 05:16:39 -- nvmf/common.sh@104 -- # continue 2 00:18:42.569 05:16:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:42.569 05:16:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.569 05:16:39 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:42.569 05:16:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.569 05:16:39 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:42.569 05:16:39 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:18:42.569 05:16:39 -- nvmf/common.sh@104 -- # continue 2 00:18:42.569 05:16:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:42.569 05:16:39 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:18:42.569 05:16:39 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:18:42.569 05:16:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:42.569 05:16:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:18:42.569 05:16:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:42.569 05:16:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:42.569 05:16:39 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:18:42.569 05:16:39 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:18:42.569 05:16:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:42.569 05:16:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:18:42.569 05:16:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:42.569 05:16:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:42.569 
192.168.100.9' 00:18:42.569 05:16:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:42.569 192.168.100.9' 00:18:42.569 05:16:39 -- nvmf/common.sh@445 -- # head -n 1 00:18:42.569 05:16:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:42.569 05:16:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:42.569 192.168.100.9' 00:18:42.569 05:16:39 -- nvmf/common.sh@446 -- # tail -n +2 00:18:42.569 05:16:39 -- nvmf/common.sh@446 -- # head -n 1 00:18:42.569 05:16:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:42.569 05:16:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:42.569 05:16:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:42.569 05:16:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:42.569 05:16:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:42.569 05:16:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:42.569 05:16:39 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:42.569 05:16:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:42.569 05:16:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:42.569 05:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:42.569 05:16:39 -- nvmf/common.sh@469 -- # nvmfpid=305208 00:18:42.569 05:16:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.569 05:16:39 -- nvmf/common.sh@470 -- # waitforlisten 305208 00:18:42.569 05:16:39 -- common/autotest_common.sh@829 -- # '[' -z 305208 ']' 00:18:42.569 05:16:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.569 05:16:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.569 05:16:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:42.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.569 05:16:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.569 05:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:42.569 [2024-11-20 05:16:39.151578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:42.569 [2024-11-20 05:16:39.151624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.569 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.569 [2024-11-20 05:16:39.208567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.569 [2024-11-20 05:16:39.284366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:42.569 [2024-11-20 05:16:39.284472] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.569 [2024-11-20 05:16:39.284480] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.569 [2024-11-20 05:16:39.284485] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.569 [2024-11-20 05:16:39.284527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.569 [2024-11-20 05:16:39.284648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.569 [2024-11-20 05:16:39.284712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.569 [2024-11-20 05:16:39.284713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.507 05:16:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.507 05:16:39 -- common/autotest_common.sh@862 -- # return 0 00:18:43.507 05:16:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:43.507 05:16:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:43.507 05:16:39 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 05:16:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.507 05:16:40 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:43.507 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.507 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 [2024-11-20 05:16:40.030091] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1dee100/0x1ded740) succeed. 00:18:43.507 [2024-11-20 05:16:40.039085] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1def470/0x1dedcc0) succeed. 00:18:43.507 [2024-11-20 05:16:40.039108] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:18:43.507 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.507 05:16:40 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:43.507 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.507 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:43.507 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.507 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 Malloc1 00:18:43.507 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.507 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:43.507 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.507 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.507 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:43.507 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.507 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.507 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:43.507 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.507 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 [2024-11-20 05:16:40.102225] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:43.507 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.507 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.507 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:43.507 
05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.507 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 Malloc2 00:18:43.507 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.507 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:43.507 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.507 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.507 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:43.507 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.507 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.507 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:18:43.507 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.507 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.507 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.507 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.508 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 Malloc3 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.508 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 Malloc4 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 
192.168.100.8 -s 4420 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.508 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 Malloc5 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.508 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 
05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 Malloc6 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.508 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:43.508 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.508 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.767 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 Malloc7 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.767 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 Malloc8 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:18:43.767 05:16:40 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.767 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 Malloc9 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.767 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set 
+x 00:18:43.767 Malloc10 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:43.767 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.767 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.767 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:18:43.768 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.768 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.768 05:16:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.768 05:16:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:43.768 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.768 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 Malloc11 00:18:43.768 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.768 05:16:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:43.768 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.768 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.768 05:16:40 -- target/multiconnection.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:43.768 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.768 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.768 05:16:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:18:43.768 05:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.768 05:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 05:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.768 05:16:40 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:43.768 05:16:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.768 05:16:40 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:44.026 05:16:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:44.026 05:16:40 -- common/autotest_common.sh@1187 -- # local i=0 00:18:44.026 05:16:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.026 05:16:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:44.026 05:16:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:46.561 05:16:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:46.561 05:16:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:46.561 05:16:42 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:46.561 05:16:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:46.561 05:16:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.561 05:16:42 -- common/autotest_common.sh@1197 -- # return 0 00:18:46.561 05:16:42 -- 
target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.561 05:16:42 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:18:46.561 05:16:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:46.561 05:16:43 -- common/autotest_common.sh@1187 -- # local i=0 00:18:46.561 05:16:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.561 05:16:43 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:46.561 05:16:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:48.465 05:16:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:48.465 05:16:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:48.466 05:16:45 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:48.466 05:16:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:48.466 05:16:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.466 05:16:45 -- common/autotest_common.sh@1197 -- # return 0 00:18:48.466 05:16:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.466 05:16:45 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:18:48.725 05:16:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:48.725 05:16:45 -- common/autotest_common.sh@1187 -- # local i=0 00:18:48.725 05:16:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.725 05:16:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:48.725 05:16:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:50.632 05:16:47 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:50.632 05:16:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:50.632 05:16:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:50.632 05:16:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:50.632 05:16:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.632 05:16:47 -- common/autotest_common.sh@1197 -- # return 0 00:18:50.632 05:16:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:50.632 05:16:47 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:18:50.892 05:16:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:50.892 05:16:47 -- common/autotest_common.sh@1187 -- # local i=0 00:18:50.892 05:16:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.892 05:16:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:50.892 05:16:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:52.798 05:16:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:52.798 05:16:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:52.798 05:16:49 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:52.798 05:16:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:52.798 05:16:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.798 05:16:49 -- common/autotest_common.sh@1197 -- # return 0 00:18:52.798 05:16:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.798 05:16:49 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma 
-n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:18:53.057 05:16:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:53.057 05:16:49 -- common/autotest_common.sh@1187 -- # local i=0 00:18:53.057 05:16:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:53.057 05:16:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:53.057 05:16:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:55.595 05:16:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:55.595 05:16:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:55.595 05:16:51 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:55.595 05:16:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:55.595 05:16:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:55.595 05:16:51 -- common/autotest_common.sh@1197 -- # return 0 00:18:55.595 05:16:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:55.595 05:16:51 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:18:55.595 05:16:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:55.595 05:16:52 -- common/autotest_common.sh@1187 -- # local i=0 00:18:55.595 05:16:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.595 05:16:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:55.595 05:16:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:57.502 05:16:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:57.502 05:16:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:57.502 05:16:54 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:57.502 05:16:54 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:18:57.502 05:16:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.502 05:16:54 -- common/autotest_common.sh@1197 -- # return 0 00:18:57.502 05:16:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.502 05:16:54 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:18:57.502 05:16:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:57.502 05:16:54 -- common/autotest_common.sh@1187 -- # local i=0 00:18:57.502 05:16:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.502 05:16:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:57.502 05:16:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:00.039 05:16:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:00.039 05:16:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:00.039 05:16:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:19:00.039 05:16:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:00.039 05:16:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.039 05:16:56 -- common/autotest_common.sh@1197 -- # return 0 00:19:00.039 05:16:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.039 05:16:56 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:19:00.039 05:16:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:00.039 05:16:56 -- common/autotest_common.sh@1187 -- # local i=0 00:19:00.039 05:16:56 -- common/autotest_common.sh@1188 -- # local 
nvme_device_counter=1 nvme_devices=0 00:19:00.039 05:16:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:00.039 05:16:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:01.957 05:16:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:01.957 05:16:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:01.957 05:16:58 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:19:01.957 05:16:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:01.957 05:16:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.957 05:16:58 -- common/autotest_common.sh@1197 -- # return 0 00:19:01.957 05:16:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.957 05:16:58 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:19:02.216 05:16:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:02.216 05:16:58 -- common/autotest_common.sh@1187 -- # local i=0 00:19:02.216 05:16:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.216 05:16:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:02.216 05:16:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:04.123 05:17:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:04.124 05:17:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:04.124 05:17:00 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:19:04.124 05:17:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:04.124 05:17:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.124 05:17:00 -- common/autotest_common.sh@1197 -- # return 0 00:19:04.124 05:17:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:19:04.124 05:17:00 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:19:04.383 05:17:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:04.383 05:17:01 -- common/autotest_common.sh@1187 -- # local i=0 00:19:04.383 05:17:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.383 05:17:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:04.383 05:17:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:06.288 05:17:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:06.288 05:17:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:06.288 05:17:03 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:19:06.288 05:17:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:06.288 05:17:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.288 05:17:03 -- common/autotest_common.sh@1197 -- # return 0 00:19:06.288 05:17:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:06.288 05:17:03 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:19:06.548 05:17:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:06.548 05:17:03 -- common/autotest_common.sh@1187 -- # local i=0 00:19:06.548 05:17:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:06.548 05:17:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:06.548 05:17:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:09.083 05:17:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:09.083 
05:17:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:09.083 05:17:05 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:19:09.083 05:17:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:09.083 05:17:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.083 05:17:05 -- common/autotest_common.sh@1197 -- # return 0 00:19:09.083 05:17:05 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:09.083 [global] 00:19:09.083 thread=1 00:19:09.083 invalidate=1 00:19:09.083 rw=read 00:19:09.083 time_based=1 00:19:09.083 runtime=10 00:19:09.083 ioengine=libaio 00:19:09.083 direct=1 00:19:09.083 bs=262144 00:19:09.083 iodepth=64 00:19:09.084 norandommap=1 00:19:09.084 numjobs=1 00:19:09.084 00:19:09.084 [job0] 00:19:09.084 filename=/dev/nvme0n1 00:19:09.084 [job1] 00:19:09.084 filename=/dev/nvme10n1 00:19:09.084 [job2] 00:19:09.084 filename=/dev/nvme11n1 00:19:09.084 [job3] 00:19:09.084 filename=/dev/nvme2n1 00:19:09.084 [job4] 00:19:09.084 filename=/dev/nvme3n1 00:19:09.084 [job5] 00:19:09.084 filename=/dev/nvme4n1 00:19:09.084 [job6] 00:19:09.084 filename=/dev/nvme5n1 00:19:09.084 [job7] 00:19:09.084 filename=/dev/nvme6n1 00:19:09.084 [job8] 00:19:09.084 filename=/dev/nvme7n1 00:19:09.084 [job9] 00:19:09.084 filename=/dev/nvme8n1 00:19:09.084 [job10] 00:19:09.084 filename=/dev/nvme9n1 00:19:09.084 Could not set queue depth (nvme0n1) 00:19:09.084 Could not set queue depth (nvme10n1) 00:19:09.084 Could not set queue depth (nvme11n1) 00:19:09.084 Could not set queue depth (nvme2n1) 00:19:09.084 Could not set queue depth (nvme3n1) 00:19:09.084 Could not set queue depth (nvme4n1) 00:19:09.084 Could not set queue depth (nvme5n1) 00:19:09.084 Could not set queue depth (nvme6n1) 00:19:09.084 Could not set queue depth (nvme7n1) 00:19:09.084 Could not set queue depth (nvme8n1) 00:19:09.084 Could not set queue 
depth (nvme9n1) 00:19:09.084 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.084 fio-3.35 00:19:09.084 Starting 11 threads 00:19:21.294 00:19:21.294 job0: (groupid=0, jobs=1): err= 0: pid=310154: Wed Nov 20 05:17:16 2024 00:19:21.294 read: IOPS=1990, BW=498MiB/s (522MB/s)(4992MiB/10030msec) 00:19:21.294 slat (usec): min=10, max=20581, avg=494.16, stdev=1220.39 00:19:21.294 clat (usec): min=491, max=72188, avg=31621.94, stdev=7677.51 00:19:21.294 lat (usec): min=511, max=73439, avg=32116.10, stdev=7845.06 00:19:21.294 clat percentiles (usec): 00:19:21.294 | 1.00th=[10028], 5.00th=[23987], 10.00th=[24511], 20.00th=[25297], 00:19:21.294 | 30.00th=[25822], 40.00th=[28443], 50.00th=[33162], 
60.00th=[33817], 00:19:21.294 | 70.00th=[34866], 80.00th=[37487], 90.00th=[38536], 95.00th=[41681], 00:19:21.294 | 99.00th=[55837], 99.50th=[57410], 99.90th=[60031], 99.95th=[63177], 00:19:21.294 | 99.99th=[71828] 00:19:21.294 bw ( KiB/s): min=398051, max=638464, per=11.02%, avg=509401.35, stdev=87903.62, samples=20 00:19:21.294 iops : min= 1554, max= 2494, avg=1989.80, stdev=343.43, samples=20 00:19:21.294 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:19:21.294 lat (msec) : 2=0.09%, 4=0.20%, 10=0.68%, 20=2.26%, 50=93.73% 00:19:21.294 lat (msec) : 100=3.01% 00:19:21.294 cpu : usr=0.63%, sys=5.17%, ctx=4736, majf=0, minf=4097 00:19:21.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:21.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.294 issued rwts: total=19968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.294 job1: (groupid=0, jobs=1): err= 0: pid=310155: Wed Nov 20 05:17:16 2024 00:19:21.294 read: IOPS=1348, BW=337MiB/s (354MB/s)(3385MiB/10039msec) 00:19:21.294 slat (usec): min=11, max=16514, avg=732.97, stdev=1683.04 00:19:21.294 clat (usec): min=10499, max=76688, avg=46669.23, stdev=5928.14 00:19:21.294 lat (usec): min=10701, max=76737, avg=47402.21, stdev=6152.07 00:19:21.294 clat percentiles (usec): 00:19:21.294 | 1.00th=[36963], 5.00th=[38536], 10.00th=[39060], 20.00th=[40109], 00:19:21.294 | 30.00th=[43779], 40.00th=[45876], 50.00th=[46924], 60.00th=[47449], 00:19:21.294 | 70.00th=[50070], 80.00th=[52167], 90.00th=[54264], 95.00th=[55313], 00:19:21.294 | 99.00th=[59507], 99.50th=[61604], 99.90th=[70779], 99.95th=[73925], 00:19:21.294 | 99.99th=[77071] 00:19:21.294 bw ( KiB/s): min=292864, max=407552, per=7.46%, avg=344964.95, stdev=32832.24, samples=20 00:19:21.294 iops : min= 1144, max= 1592, avg=1347.40, stdev=128.21, samples=20 
00:19:21.294 lat (msec) : 20=0.18%, 50=69.99%, 100=29.82% 00:19:21.294 cpu : usr=0.41%, sys=4.55%, ctx=2980, majf=0, minf=4097 00:19:21.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:21.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.294 issued rwts: total=13541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.294 job2: (groupid=0, jobs=1): err= 0: pid=310156: Wed Nov 20 05:17:16 2024 00:19:21.294 read: IOPS=1490, BW=373MiB/s (391MB/s)(3742MiB/10040msec) 00:19:21.294 slat (usec): min=10, max=14299, avg=665.27, stdev=1487.74 00:19:21.294 clat (usec): min=1723, max=83502, avg=42216.99, stdev=9299.36 00:19:21.294 lat (usec): min=1751, max=83524, avg=42882.26, stdev=9498.30 00:19:21.294 clat percentiles (usec): 00:19:21.294 | 1.00th=[23725], 5.00th=[24773], 10.00th=[25560], 20.00th=[37487], 00:19:21.294 | 30.00th=[39060], 40.00th=[40109], 50.00th=[45351], 60.00th=[46400], 00:19:21.294 | 70.00th=[47449], 80.00th=[49546], 90.00th=[52691], 95.00th=[53740], 00:19:21.294 | 99.00th=[58459], 99.50th=[59507], 99.90th=[71828], 99.95th=[81265], 00:19:21.294 | 99.99th=[83362] 00:19:21.294 bw ( KiB/s): min=302080, max=603648, per=8.25%, avg=381486.20, stdev=81852.15, samples=20 00:19:21.294 iops : min= 1180, max= 2358, avg=1490.10, stdev=319.75, samples=20 00:19:21.294 lat (msec) : 2=0.01%, 4=0.09%, 10=0.13%, 20=0.23%, 50=80.29% 00:19:21.294 lat (msec) : 100=19.25% 00:19:21.294 cpu : usr=0.30%, sys=4.07%, ctx=3411, majf=0, minf=4097 00:19:21.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:21.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.294 issued rwts: total=14969,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:21.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.294 job3: (groupid=0, jobs=1): err= 0: pid=310157: Wed Nov 20 05:17:16 2024 00:19:21.294 read: IOPS=1929, BW=482MiB/s (506MB/s)(4843MiB/10041msec) 00:19:21.294 slat (usec): min=11, max=35559, avg=497.02, stdev=1594.40 00:19:21.294 clat (usec): min=772, max=86585, avg=32645.85, stdev=15919.95 00:19:21.294 lat (usec): min=806, max=86974, avg=33142.87, stdev=16214.03 00:19:21.294 clat percentiles (usec): 00:19:21.294 | 1.00th=[ 4817], 5.00th=[11863], 10.00th=[12125], 20.00th=[12518], 00:19:21.294 | 30.00th=[23462], 40.00th=[26346], 50.00th=[37487], 60.00th=[41157], 00:19:21.294 | 70.00th=[47449], 80.00th=[48497], 90.00th=[50070], 95.00th=[51643], 00:19:21.294 | 99.00th=[57934], 99.50th=[62129], 99.90th=[73925], 99.95th=[81265], 00:19:21.294 | 99.99th=[86508] 00:19:21.294 bw ( KiB/s): min=309248, max=1308160, per=10.69%, avg=494169.15, stdev=276242.13, samples=20 00:19:21.294 iops : min= 1208, max= 5110, avg=1930.25, stdev=1079.11, samples=20 00:19:21.294 lat (usec) : 1000=0.04% 00:19:21.294 lat (msec) : 2=0.21%, 4=0.35%, 10=1.13%, 20=27.74%, 50=60.85% 00:19:21.294 lat (msec) : 100=9.68% 00:19:21.294 cpu : usr=0.39%, sys=5.04%, ctx=5112, majf=0, minf=4097 00:19:21.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:21.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.294 issued rwts: total=19371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.294 job4: (groupid=0, jobs=1): err= 0: pid=310158: Wed Nov 20 05:17:16 2024 00:19:21.294 read: IOPS=1447, BW=362MiB/s (379MB/s)(3634MiB/10040msec) 00:19:21.294 slat (usec): min=10, max=15087, avg=679.62, stdev=1599.53 00:19:21.294 clat (usec): min=890, max=85264, avg=43488.08, stdev=10905.73 00:19:21.294 lat (usec): 
min=929, max=85280, avg=44167.70, stdev=11140.17 00:19:21.294 clat percentiles (usec): 00:19:21.294 | 1.00th=[20579], 5.00th=[24249], 10.00th=[25035], 20.00th=[30540], 00:19:21.294 | 30.00th=[41157], 40.00th=[45876], 50.00th=[46924], 60.00th=[47973], 00:19:21.294 | 70.00th=[49546], 80.00th=[52691], 90.00th=[54264], 95.00th=[55313], 00:19:21.294 | 99.00th=[59507], 99.50th=[62653], 99.90th=[77071], 99.95th=[83362], 00:19:21.294 | 99.99th=[85459] 00:19:21.294 bw ( KiB/s): min=293376, max=640512, per=8.01%, avg=370328.75, stdev=99049.86, samples=20 00:19:21.294 iops : min= 1146, max= 2502, avg=1446.55, stdev=386.88, samples=20 00:19:21.294 lat (usec) : 1000=0.01% 00:19:21.294 lat (msec) : 2=0.03%, 4=0.02%, 10=0.36%, 20=0.56%, 50=70.96% 00:19:21.294 lat (msec) : 100=28.06% 00:19:21.294 cpu : usr=0.37%, sys=4.72%, ctx=3344, majf=0, minf=4097 00:19:21.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:21.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.294 issued rwts: total=14534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.294 job5: (groupid=0, jobs=1): err= 0: pid=310159: Wed Nov 20 05:17:16 2024 00:19:21.294 read: IOPS=1777, BW=444MiB/s (466MB/s)(4455MiB/10028msec) 00:19:21.294 slat (usec): min=10, max=10656, avg=551.67, stdev=1243.46 00:19:21.294 clat (usec): min=2831, max=64588, avg=35430.01, stdev=7485.04 00:19:21.294 lat (usec): min=2851, max=65044, avg=35981.68, stdev=7653.60 00:19:21.294 clat percentiles (usec): 00:19:21.294 | 1.00th=[18220], 5.00th=[24511], 10.00th=[25297], 20.00th=[32113], 00:19:21.294 | 30.00th=[33162], 40.00th=[33817], 50.00th=[34866], 60.00th=[36439], 00:19:21.294 | 70.00th=[37487], 80.00th=[38536], 90.00th=[48497], 95.00th=[50070], 00:19:21.294 | 99.00th=[55313], 99.50th=[56361], 99.90th=[58983], 99.95th=[60556], 
00:19:21.294 | 99.99th=[62653] 00:19:21.294 bw ( KiB/s): min=319361, max=638976, per=9.83%, avg=454490.75, stdev=79184.68, samples=20 00:19:21.294 iops : min= 1247, max= 2496, avg=1775.30, stdev=309.35, samples=20 00:19:21.294 lat (msec) : 4=0.07%, 10=0.40%, 20=0.62%, 50=93.87%, 100=5.03% 00:19:21.294 cpu : usr=0.23%, sys=4.70%, ctx=4340, majf=0, minf=4097 00:19:21.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:21.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.294 issued rwts: total=17820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.294 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.294 job6: (groupid=0, jobs=1): err= 0: pid=310160: Wed Nov 20 05:17:16 2024 00:19:21.294 read: IOPS=1879, BW=470MiB/s (493MB/s)(4717MiB/10041msec) 00:19:21.294 slat (usec): min=10, max=40806, avg=512.35, stdev=1468.90 00:19:21.294 clat (usec): min=8702, max=97379, avg=33511.82, stdev=10856.30 00:19:21.294 lat (usec): min=8906, max=97397, avg=34024.17, stdev=11068.55 00:19:21.294 clat percentiles (usec): 00:19:21.294 | 1.00th=[11994], 5.00th=[13042], 10.00th=[24249], 20.00th=[25035], 00:19:21.294 | 30.00th=[25560], 40.00th=[26346], 50.00th=[33817], 60.00th=[36963], 00:19:21.294 | 70.00th=[38011], 80.00th=[46400], 90.00th=[48497], 95.00th=[49546], 00:19:21.294 | 99.00th=[57410], 99.50th=[60031], 99.90th=[72877], 99.95th=[79168], 00:19:21.294 | 99.99th=[96994] 00:19:21.294 bw ( KiB/s): min=320512, max=868151, per=10.41%, avg=481221.60, stdev=148052.75, samples=20 00:19:21.294 iops : min= 1252, max= 3391, avg=1879.75, stdev=578.29, samples=20 00:19:21.294 lat (msec) : 10=0.04%, 20=6.84%, 50=88.99%, 100=4.12% 00:19:21.294 cpu : usr=0.31%, sys=4.79%, ctx=4744, majf=0, minf=3722 00:19:21.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:21.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.294 issued rwts: total=18869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.295 job7: (groupid=0, jobs=1): err= 0: pid=310161: Wed Nov 20 05:17:16 2024 00:19:21.295 read: IOPS=1493, BW=373MiB/s (392MB/s)(3750MiB/10039msec) 00:19:21.295 slat (usec): min=12, max=12553, avg=660.13, stdev=1610.28 00:19:21.295 clat (usec): min=10692, max=84454, avg=42139.71, stdev=10439.65 00:19:21.295 lat (usec): min=10902, max=84492, avg=42799.83, stdev=10664.73 00:19:21.295 clat percentiles (usec): 00:19:21.295 | 1.00th=[23462], 5.00th=[25822], 10.00th=[26608], 20.00th=[27919], 00:19:21.295 | 30.00th=[38536], 40.00th=[40633], 50.00th=[45876], 60.00th=[47449], 00:19:21.295 | 70.00th=[48497], 80.00th=[51643], 90.00th=[54264], 95.00th=[55313], 00:19:21.295 | 99.00th=[60031], 99.50th=[62129], 99.90th=[79168], 99.95th=[82314], 00:19:21.295 | 99.99th=[84411] 00:19:21.295 bw ( KiB/s): min=293376, max=590848, per=8.27%, avg=382235.95, stdev=92958.18, samples=20 00:19:21.295 iops : min= 1146, max= 2308, avg=1493.05, stdev=363.11, samples=20 00:19:21.295 lat (msec) : 20=0.13%, 50=77.53%, 100=22.34% 00:19:21.295 cpu : usr=0.45%, sys=5.05%, ctx=3241, majf=0, minf=4097 00:19:21.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:21.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.295 issued rwts: total=14998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.295 job8: (groupid=0, jobs=1): err= 0: pid=310162: Wed Nov 20 05:17:16 2024 00:19:21.295 read: IOPS=1487, BW=372MiB/s (390MB/s)(3734MiB/10038msec) 00:19:21.295 slat (usec): min=13, max=15558, avg=666.62, stdev=1516.50 
00:19:21.295 clat (usec): min=10092, max=84481, avg=42310.36, stdev=9090.10 00:19:21.295 lat (usec): min=10313, max=84518, avg=42976.98, stdev=9298.66 00:19:21.295 clat percentiles (usec): 00:19:21.295 | 1.00th=[23987], 5.00th=[24773], 10.00th=[25560], 20.00th=[37487], 00:19:21.295 | 30.00th=[39060], 40.00th=[40633], 50.00th=[45351], 60.00th=[46400], 00:19:21.295 | 70.00th=[47449], 80.00th=[49546], 90.00th=[52691], 95.00th=[53740], 00:19:21.295 | 99.00th=[58459], 99.50th=[60031], 99.90th=[73925], 99.95th=[81265], 00:19:21.295 | 99.99th=[84411] 00:19:21.295 bw ( KiB/s): min=301568, max=600064, per=8.23%, avg=380561.50, stdev=81544.93, samples=20 00:19:21.295 iops : min= 1178, max= 2344, avg=1486.40, stdev=318.47, samples=20 00:19:21.295 lat (msec) : 20=0.23%, 50=81.09%, 100=18.68% 00:19:21.295 cpu : usr=0.51%, sys=5.33%, ctx=3134, majf=0, minf=4097 00:19:21.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:21.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.295 issued rwts: total=14934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.295 job9: (groupid=0, jobs=1): err= 0: pid=310163: Wed Nov 20 05:17:16 2024 00:19:21.295 read: IOPS=1762, BW=441MiB/s (462MB/s)(4425MiB/10039msec) 00:19:21.295 slat (usec): min=10, max=30840, avg=554.30, stdev=1803.57 00:19:21.295 clat (usec): min=6785, max=83182, avg=35718.96, stdev=13670.18 00:19:21.295 lat (usec): min=6821, max=87472, avg=36273.26, stdev=13960.93 00:19:21.295 clat percentiles (usec): 00:19:21.295 | 1.00th=[ 8455], 5.00th=[12780], 10.00th=[13566], 20.00th=[25822], 00:19:21.295 | 30.00th=[26608], 40.00th=[27657], 50.00th=[38011], 60.00th=[41157], 00:19:21.295 | 70.00th=[47449], 80.00th=[48497], 90.00th=[52167], 95.00th=[54264], 00:19:21.295 | 99.00th=[57934], 99.50th=[63177], 99.90th=[79168], 
99.95th=[81265], 00:19:21.295 | 99.99th=[82314] 00:19:21.295 bw ( KiB/s): min=286208, max=1178624, per=9.76%, avg=451405.40, stdev=205716.08, samples=20 00:19:21.295 iops : min= 1118, max= 4604, avg=1763.15, stdev=803.60, samples=20 00:19:21.295 lat (msec) : 10=1.07%, 20=12.48%, 50=73.31%, 100=13.14% 00:19:21.295 cpu : usr=0.31%, sys=4.20%, ctx=4443, majf=0, minf=4097 00:19:21.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:21.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.295 issued rwts: total=17698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.295 job10: (groupid=0, jobs=1): err= 0: pid=310164: Wed Nov 20 05:17:16 2024 00:19:21.295 read: IOPS=1459, BW=365MiB/s (383MB/s)(3659MiB/10030msec) 00:19:21.295 slat (usec): min=10, max=19640, avg=647.70, stdev=1616.60 00:19:21.295 clat (usec): min=7333, max=68750, avg=43167.70, stdev=9416.07 00:19:21.295 lat (usec): min=7538, max=69344, avg=43815.40, stdev=9640.28 00:19:21.295 clat percentiles (usec): 00:19:21.295 | 1.00th=[25035], 5.00th=[32375], 10.00th=[32900], 20.00th=[33424], 00:19:21.295 | 30.00th=[34341], 40.00th=[36963], 50.00th=[43779], 60.00th=[49021], 00:19:21.295 | 70.00th=[51119], 80.00th=[53216], 90.00th=[54264], 95.00th=[55313], 00:19:21.295 | 99.00th=[59507], 99.50th=[63177], 99.90th=[66323], 99.95th=[67634], 00:19:21.295 | 99.99th=[68682] 00:19:21.295 bw ( KiB/s): min=296960, max=487424, per=8.07%, avg=372965.60, stdev=72228.60, samples=20 00:19:21.295 iops : min= 1160, max= 1904, avg=1456.85, stdev=282.18, samples=20 00:19:21.295 lat (msec) : 10=0.14%, 20=0.33%, 50=65.71%, 100=33.82% 00:19:21.295 cpu : usr=0.34%, sys=4.88%, ctx=3778, majf=0, minf=4097 00:19:21.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:21.295 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.295 issued rwts: total=14636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.295 00:19:21.295 Run status group 0 (all jobs): 00:19:21.295 READ: bw=4515MiB/s (4734MB/s), 337MiB/s-498MiB/s (354MB/s-522MB/s), io=44.3GiB (47.5GB), run=10028-10041msec 00:19:21.295 00:19:21.295 Disk stats (read/write): 00:19:21.295 nvme0n1: ios=39803/0, merge=0/0, ticks=1227526/0, in_queue=1227526, util=97.76% 00:19:21.295 nvme10n1: ios=26946/0, merge=0/0, ticks=1228907/0, in_queue=1228907, util=97.96% 00:19:21.295 nvme11n1: ios=29793/0, merge=0/0, ticks=1226428/0, in_queue=1226428, util=98.06% 00:19:21.295 nvme2n1: ios=38609/0, merge=0/0, ticks=1228083/0, in_queue=1228083, util=98.18% 00:19:21.295 nvme3n1: ios=28932/0, merge=0/0, ticks=1227590/0, in_queue=1227590, util=98.21% 00:19:21.295 nvme4n1: ios=35510/0, merge=0/0, ticks=1226115/0, in_queue=1226115, util=98.47% 00:19:21.295 nvme5n1: ios=37610/0, merge=0/0, ticks=1225955/0, in_queue=1225955, util=98.59% 00:19:21.295 nvme6n1: ios=29869/0, merge=0/0, ticks=1227837/0, in_queue=1227837, util=98.64% 00:19:21.295 nvme7n1: ios=29741/0, merge=0/0, ticks=1228345/0, in_queue=1228345, util=98.97% 00:19:21.295 nvme8n1: ios=35232/0, merge=0/0, ticks=1224739/0, in_queue=1224739, util=99.08% 00:19:21.295 nvme9n1: ios=29131/0, merge=0/0, ticks=1230657/0, in_queue=1230657, util=99.20% 00:19:21.295 05:17:16 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:21.295 [global] 00:19:21.295 thread=1 00:19:21.295 invalidate=1 00:19:21.295 rw=randwrite 00:19:21.295 time_based=1 00:19:21.295 runtime=10 00:19:21.295 ioengine=libaio 00:19:21.295 direct=1 00:19:21.295 bs=262144 00:19:21.295 iodepth=64 00:19:21.295 norandommap=1 00:19:21.295 numjobs=1 
00:19:21.295 00:19:21.295 [job0] 00:19:21.295 filename=/dev/nvme0n1 00:19:21.295 [job1] 00:19:21.295 filename=/dev/nvme10n1 00:19:21.295 [job2] 00:19:21.295 filename=/dev/nvme11n1 00:19:21.295 [job3] 00:19:21.295 filename=/dev/nvme2n1 00:19:21.295 [job4] 00:19:21.295 filename=/dev/nvme3n1 00:19:21.295 [job5] 00:19:21.295 filename=/dev/nvme4n1 00:19:21.295 [job6] 00:19:21.295 filename=/dev/nvme5n1 00:19:21.295 [job7] 00:19:21.295 filename=/dev/nvme6n1 00:19:21.295 [job8] 00:19:21.295 filename=/dev/nvme7n1 00:19:21.295 [job9] 00:19:21.295 filename=/dev/nvme8n1 00:19:21.295 [job10] 00:19:21.295 filename=/dev/nvme9n1 00:19:21.295 Could not set queue depth (nvme0n1) 00:19:21.295 Could not set queue depth (nvme10n1) 00:19:21.295 Could not set queue depth (nvme11n1) 00:19:21.295 Could not set queue depth (nvme2n1) 00:19:21.295 Could not set queue depth (nvme3n1) 00:19:21.295 Could not set queue depth (nvme4n1) 00:19:21.295 Could not set queue depth (nvme5n1) 00:19:21.295 Could not set queue depth (nvme6n1) 00:19:21.295 Could not set queue depth (nvme7n1) 00:19:21.295 Could not set queue depth (nvme8n1) 00:19:21.295 Could not set queue depth (nvme9n1) 00:19:21.295 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.295 fio-3.35 00:19:21.295 Starting 11 threads 00:19:31.272 00:19:31.272 job0: (groupid=0, jobs=1): err= 0: pid=311949: Wed Nov 20 05:17:27 2024 00:19:31.272 write: IOPS=1461, BW=365MiB/s (383MB/s)(3666MiB/10037msec); 0 zone resets 00:19:31.272 slat (usec): min=22, max=27746, avg=672.39, stdev=2288.00 00:19:31.272 clat (usec): min=4637, max=93200, avg=43117.42, stdev=9350.15 00:19:31.272 lat (usec): min=4687, max=93252, avg=43789.80, stdev=9706.46 00:19:31.272 clat percentiles (usec): 00:19:31.272 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32375], 20.00th=[33424], 00:19:31.272 | 30.00th=[33817], 40.00th=[35914], 50.00th=[48497], 60.00th=[49546], 00:19:31.272 | 70.00th=[50594], 80.00th=[51119], 90.00th=[52691], 95.00th=[53740], 00:19:31.272 | 99.00th=[64750], 99.50th=[68682], 99.90th=[78119], 99.95th=[85459], 00:19:31.272 | 99.99th=[90702] 00:19:31.272 bw ( KiB/s): min=307200, max=483328, per=9.65%, avg=373833.80, stdev=70175.85, samples=20 00:19:31.272 iops : min= 1200, max= 1888, avg=1460.35, stdev=273.97, samples=20 00:19:31.272 lat (msec) : 10=0.06%, 20=0.16%, 50=64.92%, 100=34.85% 00:19:31.272 cpu : usr=2.85%, sys=3.86%, ctx=2754, majf=0, minf=1 00:19:31.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:31.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:19:31.272 issued rwts: total=0,14665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.272 job1: (groupid=0, jobs=1): err= 0: pid=311961: Wed Nov 20 05:17:27 2024 00:19:31.272 write: IOPS=935, BW=234MiB/s (245MB/s)(2355MiB/10064msec); 0 zone resets 00:19:31.272 slat (usec): min=21, max=39416, avg=1049.67, stdev=3830.01 00:19:31.272 clat (msec): min=4, max=155, avg=67.31, stdev=19.38 00:19:31.272 lat (msec): min=4, max=155, avg=68.36, stdev=19.99 00:19:31.272 clat percentiles (msec): 00:19:31.272 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 47], 20.00th=[ 51], 00:19:31.272 | 30.00th=[ 52], 40.00th=[ 54], 50.00th=[ 79], 60.00th=[ 80], 00:19:31.272 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 95], 00:19:31.272 | 99.00th=[ 110], 99.50th=[ 120], 99.90th=[ 133], 99.95th=[ 155], 00:19:31.272 | 99.99th=[ 157] 00:19:31.272 bw ( KiB/s): min=160768, max=449536, per=6.18%, avg=239519.60, stdev=74081.70, samples=20 00:19:31.272 iops : min= 628, max= 1756, avg=935.60, stdev=289.36, samples=20 00:19:31.272 lat (msec) : 10=0.10%, 20=0.10%, 50=21.24%, 100=76.93%, 250=1.65% 00:19:31.272 cpu : usr=2.00%, sys=2.38%, ctx=1946, majf=0, minf=1 00:19:31.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:31.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.272 issued rwts: total=0,9418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.272 job2: (groupid=0, jobs=1): err= 0: pid=311962: Wed Nov 20 05:17:27 2024 00:19:31.272 write: IOPS=2416, BW=604MiB/s (634MB/s)(6062MiB/10032msec); 0 zone resets 00:19:31.272 slat (usec): min=13, max=37159, avg=401.73, stdev=1541.92 00:19:31.272 clat (usec): min=6416, max=81841, avg=26070.99, stdev=13756.33 00:19:31.272 lat (usec): min=6488, max=89452, 
avg=26472.72, stdev=14021.38 00:19:31.272 clat percentiles (usec): 00:19:31.272 | 1.00th=[14615], 5.00th=[15270], 10.00th=[15533], 20.00th=[16057], 00:19:31.272 | 30.00th=[16319], 40.00th=[16712], 50.00th=[17171], 60.00th=[18744], 00:19:31.272 | 70.00th=[32900], 80.00th=[35914], 90.00th=[50594], 95.00th=[51643], 00:19:31.272 | 99.00th=[56361], 99.50th=[65274], 99.90th=[76022], 99.95th=[80217], 00:19:31.272 | 99.99th=[82314] 00:19:31.272 bw ( KiB/s): min=294912, max=987648, per=15.98%, avg=619084.80, stdev=294490.61, samples=20 00:19:31.272 iops : min= 1152, max= 3858, avg=2418.30, stdev=1150.35, samples=20 00:19:31.272 lat (msec) : 10=0.02%, 20=62.12%, 50=26.39%, 100=11.47% 00:19:31.272 cpu : usr=3.97%, sys=4.42%, ctx=4033, majf=0, minf=1 00:19:31.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:31.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.272 issued rwts: total=0,24246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.272 job3: (groupid=0, jobs=1): err= 0: pid=311963: Wed Nov 20 05:17:27 2024 00:19:31.272 write: IOPS=1001, BW=250MiB/s (263MB/s)(2520MiB/10064msec); 0 zone resets 00:19:31.272 slat (usec): min=20, max=38185, avg=969.22, stdev=3588.90 00:19:31.272 clat (msec): min=5, max=147, avg=62.90, stdev=23.96 00:19:31.272 lat (msec): min=5, max=147, avg=63.87, stdev=24.52 00:19:31.272 clat percentiles (msec): 00:19:31.272 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:19:31.272 | 30.00th=[ 36], 40.00th=[ 51], 50.00th=[ 79], 60.00th=[ 80], 00:19:31.272 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 95], 00:19:31.272 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 146], 99.95th=[ 148], 00:19:31.272 | 99.99th=[ 148] 00:19:31.272 bw ( KiB/s): min=164352, max=489984, per=6.62%, avg=256482.40, stdev=109540.61, samples=20 
00:19:31.272 iops : min= 642, max= 1914, avg=1001.85, stdev=427.82, samples=20 00:19:31.272 lat (msec) : 10=0.07%, 20=0.23%, 50=38.33%, 100=59.84%, 250=1.54% 00:19:31.272 cpu : usr=2.13%, sys=2.68%, ctx=2077, majf=0, minf=1 00:19:31.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:31.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.272 issued rwts: total=0,10081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.272 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.272 job4: (groupid=0, jobs=1): err= 0: pid=311964: Wed Nov 20 05:17:27 2024 00:19:31.272 write: IOPS=2696, BW=674MiB/s (707MB/s)(6750MiB/10013msec); 0 zone resets 00:19:31.272 slat (usec): min=14, max=10868, avg=365.89, stdev=989.01 00:19:31.272 clat (usec): min=754, max=44394, avg=23362.81, stdev=8616.21 00:19:31.272 lat (usec): min=795, max=44621, avg=23728.70, stdev=8779.28 00:19:31.272 clat percentiles (usec): 00:19:31.272 | 1.00th=[10945], 5.00th=[15139], 10.00th=[15533], 20.00th=[16057], 00:19:31.272 | 30.00th=[16450], 40.00th=[16909], 50.00th=[17433], 60.00th=[29492], 00:19:31.272 | 70.00th=[32637], 80.00th=[33424], 90.00th=[34341], 95.00th=[35390], 00:19:31.272 | 99.00th=[37487], 99.50th=[38536], 99.90th=[42206], 99.95th=[43254], 00:19:31.273 | 99.99th=[43779] 00:19:31.273 bw ( KiB/s): min=473088, max=989184, per=17.80%, avg=689631.25, stdev=239460.65, samples=20 00:19:31.273 iops : min= 1848, max= 3864, avg=2693.85, stdev=935.37, samples=20 00:19:31.273 lat (usec) : 1000=0.01% 00:19:31.273 lat (msec) : 2=0.06%, 4=0.10%, 10=0.73%, 20=58.43%, 50=40.67% 00:19:31.273 cpu : usr=4.34%, sys=5.16%, ctx=4550, majf=0, minf=1 00:19:31.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:31.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.273 issued rwts: total=0,26998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.273 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.273 job5: (groupid=0, jobs=1): err= 0: pid=311965: Wed Nov 20 05:17:27 2024 00:19:31.273 write: IOPS=766, BW=192MiB/s (201MB/s)(1928MiB/10058msec); 0 zone resets 00:19:31.273 slat (usec): min=23, max=75068, avg=1298.51, stdev=7054.76 00:19:31.273 clat (msec): min=24, max=177, avg=82.14, stdev= 8.69 00:19:31.273 lat (msec): min=69, max=177, avg=83.44, stdev=11.09 00:19:31.273 clat percentiles (msec): 00:19:31.273 | 1.00th=[ 77], 5.00th=[ 78], 10.00th=[ 78], 20.00th=[ 79], 00:19:31.273 | 30.00th=[ 80], 40.00th=[ 80], 50.00th=[ 80], 60.00th=[ 81], 00:19:31.273 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 96], 00:19:31.273 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 167], 99.95th=[ 169], 00:19:31.273 | 99.99th=[ 178] 00:19:31.273 bw ( KiB/s): min=159232, max=212992, per=5.05%, avg=195814.40, stdev=14060.49, samples=20 00:19:31.273 iops : min= 622, max= 832, avg=764.90, stdev=54.92, samples=20 00:19:31.273 lat (msec) : 50=0.01%, 100=98.37%, 250=1.62% 00:19:31.273 cpu : usr=1.56%, sys=1.83%, ctx=1486, majf=0, minf=1 00:19:31.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:31.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.273 issued rwts: total=0,7713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.273 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.273 job6: (groupid=0, jobs=1): err= 0: pid=311966: Wed Nov 20 05:17:27 2024 00:19:31.273 write: IOPS=767, BW=192MiB/s (201MB/s)(1930MiB/10063msec); 0 zone resets 00:19:31.273 slat (usec): min=22, max=62577, avg=1275.42, stdev=5653.97 00:19:31.273 clat (msec): min=22, max=152, avg=82.11, stdev= 9.25 00:19:31.273 lat (msec): min=22, max=164, avg=83.38, 
stdev=10.76 00:19:31.273 clat percentiles (msec): 00:19:31.273 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 78], 20.00th=[ 79], 00:19:31.273 | 30.00th=[ 80], 40.00th=[ 80], 50.00th=[ 80], 60.00th=[ 81], 00:19:31.273 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 96], 00:19:31.273 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 153], 00:19:31.273 | 99.99th=[ 153] 00:19:31.273 bw ( KiB/s): min=160768, max=219136, per=5.06%, avg=196064.25, stdev=14815.88, samples=20 00:19:31.273 iops : min= 628, max= 856, avg=765.85, stdev=57.88, samples=20 00:19:31.273 lat (msec) : 50=0.48%, 100=97.51%, 250=2.01% 00:19:31.273 cpu : usr=1.58%, sys=2.04%, ctx=1566, majf=0, minf=1 00:19:31.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:31.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.273 issued rwts: total=0,7721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.273 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.273 job7: (groupid=0, jobs=1): err= 0: pid=311967: Wed Nov 20 05:17:27 2024 00:19:31.273 write: IOPS=1164, BW=291MiB/s (305MB/s)(2930MiB/10064msec); 0 zone resets 00:19:31.273 slat (usec): min=16, max=46037, avg=825.16, stdev=2759.39 00:19:31.273 clat (usec): min=649, max=155822, avg=54115.51, stdev=22372.21 00:19:31.273 lat (usec): min=706, max=155858, avg=54940.67, stdev=22837.16 00:19:31.273 clat percentiles (msec): 00:19:31.273 | 1.00th=[ 10], 5.00th=[ 17], 10.00th=[ 32], 20.00th=[ 34], 00:19:31.273 | 30.00th=[ 49], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:19:31.273 | 70.00th=[ 58], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 95], 00:19:31.273 | 99.00th=[ 99], 99.50th=[ 105], 99.90th=[ 138], 99.95th=[ 150], 00:19:31.273 | 99.99th=[ 157] 00:19:31.273 bw ( KiB/s): min=165376, max=588800, per=7.70%, avg=298387.50, stdev=115466.77, samples=20 00:19:31.273 iops : min= 646, max= 2300, 
avg=1165.55, stdev=451.07, samples=20 00:19:31.273 lat (usec) : 750=0.02%, 1000=0.03% 00:19:31.273 lat (msec) : 2=0.29%, 4=0.18%, 10=0.84%, 20=5.67%, 50=32.13% 00:19:31.273 lat (msec) : 100=60.19%, 250=0.67% 00:19:31.273 cpu : usr=2.50%, sys=3.37%, ctx=2638, majf=0, minf=1 00:19:31.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:31.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.273 issued rwts: total=0,11718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.273 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.273 job8: (groupid=0, jobs=1): err= 0: pid=311970: Wed Nov 20 05:17:27 2024 00:19:31.273 write: IOPS=1375, BW=344MiB/s (360MB/s)(3451MiB/10037msec); 0 zone resets 00:19:31.273 slat (usec): min=20, max=22153, avg=722.00, stdev=2197.52 00:19:31.273 clat (usec): min=4797, max=89939, avg=45803.29, stdev=8812.42 00:19:31.273 lat (usec): min=4837, max=89980, avg=46525.29, stdev=9148.21 00:19:31.273 clat percentiles (usec): 00:19:31.273 | 1.00th=[31851], 5.00th=[32900], 10.00th=[33424], 20.00th=[34341], 00:19:31.273 | 30.00th=[36439], 40.00th=[49021], 50.00th=[50070], 60.00th=[50594], 00:19:31.273 | 70.00th=[51119], 80.00th=[51643], 90.00th=[53216], 95.00th=[54789], 00:19:31.273 | 99.00th=[66323], 99.50th=[68682], 99.90th=[78119], 99.95th=[85459], 00:19:31.273 | 99.99th=[85459] 00:19:31.273 bw ( KiB/s): min=286208, max=474624, per=9.08%, avg=351750.85, stdev=63318.45, samples=20 00:19:31.273 iops : min= 1118, max= 1854, avg=1374.00, stdev=247.35, samples=20 00:19:31.273 lat (msec) : 10=0.07%, 20=0.08%, 50=52.51%, 100=47.33% 00:19:31.273 cpu : usr=2.82%, sys=3.72%, ctx=2599, majf=0, minf=1 00:19:31.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:31.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.273 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.273 issued rwts: total=0,13802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.273 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.273 job9: (groupid=0, jobs=1): err= 0: pid=311971: Wed Nov 20 05:17:27 2024 00:19:31.273 write: IOPS=768, BW=192MiB/s (201MB/s)(1933MiB/10064msec); 0 zone resets 00:19:31.273 slat (usec): min=20, max=48291, avg=1283.32, stdev=4417.51 00:19:31.273 clat (msec): min=16, max=180, avg=81.98, stdev= 8.55 00:19:31.273 lat (msec): min=16, max=180, avg=83.26, stdev= 9.57 00:19:31.273 clat percentiles (msec): 00:19:31.273 | 1.00th=[ 62], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 79], 00:19:31.273 | 30.00th=[ 80], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:19:31.273 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 96], 00:19:31.273 | 99.00th=[ 111], 99.50th=[ 124], 99.90th=[ 133], 99.95th=[ 140], 00:19:31.273 | 99.99th=[ 182] 00:19:31.273 bw ( KiB/s): min=159232, max=206848, per=5.07%, avg=196346.00, stdev=12914.72, samples=20 00:19:31.273 iops : min= 622, max= 808, avg=766.95, stdev=50.45, samples=20 00:19:31.273 lat (msec) : 20=0.13%, 50=0.44%, 100=97.25%, 250=2.19% 00:19:31.273 cpu : usr=1.57%, sys=2.36%, ctx=1671, majf=0, minf=1 00:19:31.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:31.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.273 issued rwts: total=0,7733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.273 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.273 job10: (groupid=0, jobs=1): err= 0: pid=311972: Wed Nov 20 05:17:27 2024 00:19:31.273 write: IOPS=1822, BW=456MiB/s (478MB/s)(4563MiB/10013msec); 0 zone resets 00:19:31.273 slat (usec): min=16, max=32021, avg=540.17, stdev=1633.80 00:19:31.273 clat (usec): min=1354, max=123680, avg=34564.82, stdev=19617.32 00:19:31.273 lat (usec): 
min=1424, max=123730, avg=35104.99, stdev=19972.96 00:19:31.273 clat percentiles (msec): 00:19:31.273 | 1.00th=[ 12], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 17], 00:19:31.273 | 30.00th=[ 25], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:19:31.273 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 79], 95.00th=[ 81], 00:19:31.273 | 99.00th=[ 93], 99.50th=[ 95], 99.90th=[ 99], 99.95th=[ 104], 00:19:31.273 | 99.99th=[ 124] 00:19:31.273 bw ( KiB/s): min=181760, max=976896, per=12.01%, avg=465587.20, stdev=239659.54, samples=20 00:19:31.273 iops : min= 710, max= 3816, avg=1818.70, stdev=936.17, samples=20 00:19:31.273 lat (msec) : 2=0.01%, 4=0.27%, 10=0.54%, 20=28.40%, 50=57.96% 00:19:31.273 lat (msec) : 100=12.77%, 250=0.05% 00:19:31.273 cpu : usr=4.07%, sys=4.13%, ctx=3399, majf=0, minf=1 00:19:31.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:31.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.273 issued rwts: total=0,18250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.273 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.273 00:19:31.273 Run status group 0 (all jobs): 00:19:31.273 WRITE: bw=3784MiB/s (3968MB/s), 192MiB/s-674MiB/s (201MB/s-707MB/s), io=37.2GiB (39.9GB), run=10013-10064msec 00:19:31.273 00:19:31.273 Disk stats (read/write): 00:19:31.273 nvme0n1: ios=49/28808, merge=0/0, ticks=18/1216570, in_queue=1216588, util=96.36% 00:19:31.273 nvme10n1: ios=0/18480, merge=0/0, ticks=0/1213511, in_queue=1213511, util=96.55% 00:19:31.273 nvme11n1: ios=0/47937, merge=0/0, ticks=0/1216005, in_queue=1216005, util=96.68% 00:19:31.273 nvme2n1: ios=0/19827, merge=0/0, ticks=0/1213976, in_queue=1213976, util=96.97% 00:19:31.273 nvme3n1: ios=0/52633, merge=0/0, ticks=0/1218387, in_queue=1218387, util=97.04% 00:19:31.273 nvme4n1: ios=0/15054, merge=0/0, ticks=0/1213203, in_queue=1213203, util=97.51% 
00:19:31.273 nvme5n1: ios=0/15165, merge=0/0, ticks=0/1219962, in_queue=1219962, util=97.79% 00:19:31.273 nvme6n1: ios=0/23071, merge=0/0, ticks=0/1214897, in_queue=1214897, util=97.99% 00:19:31.273 nvme7n1: ios=0/27092, merge=0/0, ticks=0/1215986, in_queue=1215986, util=98.60% 00:19:31.273 nvme8n1: ios=0/15116, merge=0/0, ticks=0/1213379, in_queue=1213379, util=98.87% 00:19:31.273 nvme9n1: ios=0/35173, merge=0/0, ticks=0/1219094, in_queue=1219094, util=99.08% 00:19:31.273 05:17:27 -- target/multiconnection.sh@36 -- # sync 00:19:31.274 05:17:27 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:31.274 05:17:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:31.274 05:17:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:31.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:31.274 05:17:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:31.274 05:17:28 -- common/autotest_common.sh@1208 -- # local i=0 00:19:31.274 05:17:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:31.274 05:17:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:31.274 05:17:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:31.274 05:17:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:31.274 05:17:28 -- common/autotest_common.sh@1220 -- # return 0 00:19:31.274 05:17:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.274 05:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.274 05:17:28 -- common/autotest_common.sh@10 -- # set +x 00:19:31.274 05:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.274 05:17:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:31.274 05:17:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:32.212 NQN:nqn.2016-06.io.spdk:cnode2 
disconnected 1 controller(s) 00:19:32.212 05:17:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:32.212 05:17:28 -- common/autotest_common.sh@1208 -- # local i=0 00:19:32.212 05:17:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:32.212 05:17:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:32.212 05:17:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:32.212 05:17:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:32.212 05:17:28 -- common/autotest_common.sh@1220 -- # return 0 00:19:32.212 05:17:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:32.212 05:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.212 05:17:28 -- common/autotest_common.sh@10 -- # set +x 00:19:32.212 05:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.212 05:17:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:32.212 05:17:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:33.150 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:33.150 05:17:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:33.150 05:17:29 -- common/autotest_common.sh@1208 -- # local i=0 00:19:33.150 05:17:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:33.150 05:17:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:33.150 05:17:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:33.150 05:17:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:33.150 05:17:29 -- common/autotest_common.sh@1220 -- # return 0 00:19:33.150 05:17:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:33.150 05:17:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.150 05:17:29 -- common/autotest_common.sh@10 -- # set +x 
00:19:33.150 05:17:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.150 05:17:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:33.150 05:17:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:34.088 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:34.088 05:17:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:34.088 05:17:30 -- common/autotest_common.sh@1208 -- # local i=0 00:19:34.088 05:17:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:34.088 05:17:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:34.088 05:17:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:34.088 05:17:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:34.088 05:17:30 -- common/autotest_common.sh@1220 -- # return 0 00:19:34.088 05:17:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:34.088 05:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.088 05:17:30 -- common/autotest_common.sh@10 -- # set +x 00:19:34.088 05:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.088 05:17:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.088 05:17:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:35.025 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:35.025 05:17:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:35.025 05:17:31 -- common/autotest_common.sh@1208 -- # local i=0 00:19:35.025 05:17:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:35.025 05:17:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:35.025 05:17:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:35.025 05:17:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:35.025 05:17:31 
-- common/autotest_common.sh@1220 -- # return 0 00:19:35.025 05:17:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:35.025 05:17:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.025 05:17:31 -- common/autotest_common.sh@10 -- # set +x 00:19:35.025 05:17:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.025 05:17:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:35.025 05:17:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:35.962 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:35.962 05:17:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:35.962 05:17:32 -- common/autotest_common.sh@1208 -- # local i=0 00:19:35.962 05:17:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:35.962 05:17:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:35.962 05:17:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:35.962 05:17:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:35.962 05:17:32 -- common/autotest_common.sh@1220 -- # return 0 00:19:35.962 05:17:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:35.962 05:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.962 05:17:32 -- common/autotest_common.sh@10 -- # set +x 00:19:35.962 05:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.962 05:17:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:35.962 05:17:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:36.900 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:36.900 05:17:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:36.900 05:17:33 -- common/autotest_common.sh@1208 -- # local i=0 00:19:36.900 05:17:33 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:36.900 05:17:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:36.900 05:17:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:36.900 05:17:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:36.900 05:17:33 -- common/autotest_common.sh@1220 -- # return 0 00:19:36.900 05:17:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:36.900 05:17:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.900 05:17:33 -- common/autotest_common.sh@10 -- # set +x 00:19:36.900 05:17:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.900 05:17:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.900 05:17:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:37.836 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:37.837 05:17:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:37.837 05:17:34 -- common/autotest_common.sh@1208 -- # local i=0 00:19:37.837 05:17:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:37.837 05:17:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:37.837 05:17:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:37.837 05:17:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:37.837 05:17:34 -- common/autotest_common.sh@1220 -- # return 0 00:19:37.837 05:17:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:37.837 05:17:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.837 05:17:34 -- common/autotest_common.sh@10 -- # set +x 00:19:37.837 05:17:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.837 05:17:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:37.837 05:17:34 -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:38.405 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:38.405 05:17:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:38.405 05:17:35 -- common/autotest_common.sh@1208 -- # local i=0 00:19:38.405 05:17:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:38.405 05:17:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:38.663 05:17:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:38.663 05:17:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:38.663 05:17:35 -- common/autotest_common.sh@1220 -- # return 0 00:19:38.663 05:17:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:38.663 05:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.663 05:17:35 -- common/autotest_common.sh@10 -- # set +x 00:19:38.663 05:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.663 05:17:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.663 05:17:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:39.598 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:39.598 05:17:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:39.598 05:17:36 -- common/autotest_common.sh@1208 -- # local i=0 00:19:39.598 05:17:36 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:39.598 05:17:36 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:39.598 05:17:36 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:39.598 05:17:36 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:39.598 05:17:36 -- common/autotest_common.sh@1220 -- # return 0 00:19:39.598 05:17:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:39.598 05:17:36 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.598 05:17:36 -- common/autotest_common.sh@10 -- # set +x 00:19:39.598 05:17:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.598 05:17:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:39.598 05:17:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:40.536 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:40.536 05:17:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:40.536 05:17:36 -- common/autotest_common.sh@1208 -- # local i=0 00:19:40.536 05:17:36 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:40.536 05:17:36 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:40.536 05:17:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:40.536 05:17:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:40.536 05:17:37 -- common/autotest_common.sh@1220 -- # return 0 00:19:40.536 05:17:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:40.536 05:17:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.536 05:17:37 -- common/autotest_common.sh@10 -- # set +x 00:19:40.536 05:17:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.536 05:17:37 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:40.536 05:17:37 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:40.536 05:17:37 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:40.536 05:17:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:40.536 05:17:37 -- nvmf/common.sh@116 -- # sync 00:19:40.536 05:17:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:40.536 05:17:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:40.536 05:17:37 -- nvmf/common.sh@119 -- # set +e 00:19:40.536 05:17:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:40.536 05:17:37 -- 
nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:40.536 rmmod nvme_rdma 00:19:40.536 rmmod nvme_fabrics 00:19:40.536 05:17:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:40.536 05:17:37 -- nvmf/common.sh@123 -- # set -e 00:19:40.536 05:17:37 -- nvmf/common.sh@124 -- # return 0 00:19:40.536 05:17:37 -- nvmf/common.sh@477 -- # '[' -n 305208 ']' 00:19:40.536 05:17:37 -- nvmf/common.sh@478 -- # killprocess 305208 00:19:40.536 05:17:37 -- common/autotest_common.sh@936 -- # '[' -z 305208 ']' 00:19:40.536 05:17:37 -- common/autotest_common.sh@940 -- # kill -0 305208 00:19:40.536 05:17:37 -- common/autotest_common.sh@941 -- # uname 00:19:40.536 05:17:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.536 05:17:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 305208 00:19:40.536 05:17:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:40.536 05:17:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:40.536 05:17:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 305208' 00:19:40.536 killing process with pid 305208 00:19:40.536 05:17:37 -- common/autotest_common.sh@955 -- # kill 305208 00:19:40.536 05:17:37 -- common/autotest_common.sh@960 -- # wait 305208 00:19:40.795 05:17:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:40.795 05:17:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:40.795 00:19:40.795 real 1m3.928s 00:19:40.796 user 4m8.674s 00:19:40.796 sys 0m16.136s 00:19:40.796 05:17:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:40.796 05:17:37 -- common/autotest_common.sh@10 -- # set +x 00:19:40.796 ************************************ 00:19:40.796 END TEST nvmf_multiconnection 00:19:40.796 ************************************ 00:19:40.796 05:17:37 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:19:40.796 
05:17:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:40.796 05:17:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:40.796 05:17:37 -- common/autotest_common.sh@10 -- # set +x 00:19:40.796 ************************************ 00:19:40.796 START TEST nvmf_initiator_timeout 00:19:40.796 ************************************ 00:19:40.796 05:17:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:19:41.055 * Looking for test storage... 00:19:41.055 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:19:41.055 05:17:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:41.055 05:17:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:41.055 05:17:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:41.055 05:17:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:41.055 05:17:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:41.055 05:17:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:41.055 05:17:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:41.055 05:17:37 -- scripts/common.sh@335 -- # IFS=.-: 00:19:41.055 05:17:37 -- scripts/common.sh@335 -- # read -ra ver1 00:19:41.055 05:17:37 -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.055 05:17:37 -- scripts/common.sh@336 -- # read -ra ver2 00:19:41.055 05:17:37 -- scripts/common.sh@337 -- # local 'op=<' 00:19:41.055 05:17:37 -- scripts/common.sh@339 -- # ver1_l=2 00:19:41.055 05:17:37 -- scripts/common.sh@340 -- # ver2_l=1 00:19:41.055 05:17:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:41.055 05:17:37 -- scripts/common.sh@343 -- # case "$op" in 00:19:41.055 05:17:37 -- scripts/common.sh@344 -- # : 1 00:19:41.055 05:17:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:41.055 05:17:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.055 05:17:37 -- scripts/common.sh@364 -- # decimal 1 00:19:41.055 05:17:37 -- scripts/common.sh@352 -- # local d=1 00:19:41.055 05:17:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.055 05:17:37 -- scripts/common.sh@354 -- # echo 1 00:19:41.055 05:17:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:41.055 05:17:37 -- scripts/common.sh@365 -- # decimal 2 00:19:41.055 05:17:37 -- scripts/common.sh@352 -- # local d=2 00:19:41.055 05:17:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.055 05:17:37 -- scripts/common.sh@354 -- # echo 2 00:19:41.055 05:17:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:41.055 05:17:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:41.055 05:17:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:41.055 05:17:37 -- scripts/common.sh@367 -- # return 0 00:19:41.055 05:17:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.055 05:17:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:41.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.055 --rc genhtml_branch_coverage=1 00:19:41.055 --rc genhtml_function_coverage=1 00:19:41.055 --rc genhtml_legend=1 00:19:41.055 --rc geninfo_all_blocks=1 00:19:41.055 --rc geninfo_unexecuted_blocks=1 00:19:41.055 00:19:41.055 ' 00:19:41.055 05:17:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:41.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.056 --rc genhtml_branch_coverage=1 00:19:41.056 --rc genhtml_function_coverage=1 00:19:41.056 --rc genhtml_legend=1 00:19:41.056 --rc geninfo_all_blocks=1 00:19:41.056 --rc geninfo_unexecuted_blocks=1 00:19:41.056 00:19:41.056 ' 00:19:41.056 05:17:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:41.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.056 --rc genhtml_branch_coverage=1 00:19:41.056 --rc 
genhtml_function_coverage=1 00:19:41.056 --rc genhtml_legend=1 00:19:41.056 --rc geninfo_all_blocks=1 00:19:41.056 --rc geninfo_unexecuted_blocks=1 00:19:41.056 00:19:41.056 ' 00:19:41.056 05:17:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:41.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.056 --rc genhtml_branch_coverage=1 00:19:41.056 --rc genhtml_function_coverage=1 00:19:41.056 --rc genhtml_legend=1 00:19:41.056 --rc geninfo_all_blocks=1 00:19:41.056 --rc geninfo_unexecuted_blocks=1 00:19:41.056 00:19:41.056 ' 00:19:41.056 05:17:37 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.056 05:17:37 -- nvmf/common.sh@7 -- # uname -s 00:19:41.056 05:17:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.056 05:17:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.056 05:17:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.056 05:17:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.056 05:17:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.056 05:17:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.056 05:17:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.056 05:17:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.056 05:17:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.056 05:17:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.056 05:17:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:41.056 05:17:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:41.056 05:17:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.056 05:17:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.056 05:17:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:41.056 05:17:37 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:19:41.056 05:17:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.056 05:17:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.056 05:17:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.056 05:17:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.056 05:17:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.056 05:17:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.056 05:17:37 -- paths/export.sh@5 -- # export PATH 00:19:41.056 05:17:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.056 05:17:37 -- nvmf/common.sh@46 -- # : 0 00:19:41.056 05:17:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:41.056 05:17:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:41.056 05:17:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:41.056 05:17:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.056 05:17:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.056 05:17:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:41.056 05:17:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:41.056 05:17:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:41.056 05:17:37 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.056 05:17:37 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.056 05:17:37 -- 
target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:41.056 05:17:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:41.056 05:17:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.056 05:17:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:41.056 05:17:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:41.056 05:17:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:41.056 05:17:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.056 05:17:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.056 05:17:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.056 05:17:37 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:41.056 05:17:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:41.056 05:17:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:41.056 05:17:37 -- common/autotest_common.sh@10 -- # set +x 00:19:46.330 05:17:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:46.330 05:17:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:46.330 05:17:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:46.330 05:17:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:46.330 05:17:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:46.330 05:17:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:46.330 05:17:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:46.330 05:17:42 -- nvmf/common.sh@294 -- # net_devs=() 00:19:46.330 05:17:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:46.330 05:17:42 -- nvmf/common.sh@295 -- # e810=() 00:19:46.330 05:17:42 -- nvmf/common.sh@295 -- # local -ga e810 00:19:46.330 05:17:42 -- nvmf/common.sh@296 -- # x722=() 00:19:46.330 05:17:42 -- nvmf/common.sh@296 -- # local -ga x722 00:19:46.330 05:17:42 -- nvmf/common.sh@297 -- # mlx=() 00:19:46.330 05:17:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:46.330 05:17:42 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.330 05:17:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:46.330 05:17:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:46.330 05:17:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:46.330 05:17:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:46.330 05:17:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:46.330 05:17:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.330 05:17:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:46.330 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:46.330 05:17:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.330 05:17:42 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:46.330 05:17:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.330 05:17:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:46.330 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:46.330 05:17:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:46.330 05:17:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:46.330 05:17:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:19:46.330 05:17:42 -- nvmf/common.sh@376 -- # modinfo irdma 00:19:46.330 05:17:42 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:19:46.330 05:17:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:46.330 05:17:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.330 05:17:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:46.330 05:17:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.330 05:17:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:46.330 Found net devices under 0000:af:00.0: cvl_0_0 00:19:46.330 05:17:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.330 05:17:42 -- nvmf/common.sh@381 -- # for pci in 
"${pci_devs[@]}" 00:19:46.330 05:17:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.330 05:17:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:46.330 05:17:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.330 05:17:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:46.330 Found net devices under 0000:af:00.1: cvl_0_1 00:19:46.330 05:17:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.330 05:17:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:46.330 05:17:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:46.330 05:17:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:46.330 05:17:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:46.330 05:17:42 -- nvmf/common.sh@57 -- # uname 00:19:46.330 05:17:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:46.330 05:17:42 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:46.330 05:17:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:46.330 05:17:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:46.330 05:17:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:46.330 05:17:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:46.330 05:17:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:46.330 05:17:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:46.330 05:17:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:46.330 05:17:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:46.330 05:17:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:46.330 05:17:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:46.330 05:17:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:46.330 05:17:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:46.330 
05:17:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:46.330 05:17:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:46.330 05:17:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:46.330 05:17:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.330 05:17:42 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:19:46.330 05:17:42 -- nvmf/common.sh@104 -- # continue 2 00:19:46.330 05:17:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:46.330 05:17:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.330 05:17:42 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.330 05:17:42 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:19:46.330 05:17:42 -- nvmf/common.sh@104 -- # continue 2 00:19:46.330 05:17:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:46.330 05:17:42 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:19:46.330 05:17:42 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:19:46.330 05:17:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:19:46.330 05:17:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:46.330 05:17:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:46.330 05:17:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:46.330 05:17:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:19:46.330 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:19:46.330 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:19:46.330 altname enp175s0f0np0 00:19:46.330 altname ens801f0np0 00:19:46.330 inet 192.168.100.8/24 scope global cvl_0_0 
00:19:46.330 valid_lft forever preferred_lft forever 00:19:46.330 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:19:46.330 valid_lft forever preferred_lft forever 00:19:46.330 05:17:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:46.330 05:17:42 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:19:46.330 05:17:42 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:19:46.330 05:17:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:19:46.330 05:17:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:46.330 05:17:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:46.330 05:17:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:46.330 05:17:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:19:46.330 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:19:46.330 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:19:46.330 altname enp175s0f1np1 00:19:46.330 altname ens801f1np1 00:19:46.330 inet 192.168.100.9/24 scope global cvl_0_1 00:19:46.330 valid_lft forever preferred_lft forever 00:19:46.330 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:19:46.330 valid_lft forever preferred_lft forever 00:19:46.330 05:17:42 -- nvmf/common.sh@410 -- # return 0 00:19:46.330 05:17:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:46.330 05:17:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:46.330 05:17:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:46.330 05:17:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:46.330 05:17:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:46.331 05:17:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:46.331 05:17:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:46.331 05:17:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:46.331 05:17:42 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:46.331 05:17:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:46.331 05:17:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:46.331 05:17:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.331 05:17:43 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:19:46.331 05:17:43 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:19:46.331 05:17:43 -- nvmf/common.sh@104 -- # continue 2 00:19:46.331 05:17:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:46.331 05:17:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.331 05:17:43 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:19:46.331 05:17:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.331 05:17:43 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:19:46.331 05:17:43 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:19:46.331 05:17:43 -- nvmf/common.sh@104 -- # continue 2 00:19:46.331 05:17:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:46.331 05:17:43 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:19:46.331 05:17:43 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:19:46.331 05:17:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:19:46.331 05:17:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:46.331 05:17:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:46.331 05:17:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:46.331 05:17:43 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:19:46.331 05:17:43 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:19:46.331 05:17:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:19:46.331 05:17:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:46.331 05:17:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:46.331 05:17:43 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:46.331 
192.168.100.9' 00:19:46.331 05:17:43 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:46.331 192.168.100.9' 00:19:46.331 05:17:43 -- nvmf/common.sh@445 -- # head -n 1 00:19:46.331 05:17:43 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:46.331 05:17:43 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:46.331 192.168.100.9' 00:19:46.331 05:17:43 -- nvmf/common.sh@446 -- # tail -n +2 00:19:46.331 05:17:43 -- nvmf/common.sh@446 -- # head -n 1 00:19:46.331 05:17:43 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:46.331 05:17:43 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:46.331 05:17:43 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:46.331 05:17:43 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:46.331 05:17:43 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:46.331 05:17:43 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:46.331 05:17:43 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:46.331 05:17:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:46.331 05:17:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.331 05:17:43 -- common/autotest_common.sh@10 -- # set +x 00:19:46.331 05:17:43 -- nvmf/common.sh@469 -- # nvmfpid=318183 00:19:46.331 05:17:43 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:46.331 05:17:43 -- nvmf/common.sh@470 -- # waitforlisten 318183 00:19:46.331 05:17:43 -- common/autotest_common.sh@829 -- # '[' -z 318183 ']' 00:19:46.331 05:17:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.331 05:17:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.331 05:17:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:46.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.331 05:17:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.331 05:17:43 -- common/autotest_common.sh@10 -- # set +x 00:19:46.331 [2024-11-20 05:17:43.108589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:46.331 [2024-11-20 05:17:43.108633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.331 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.590 [2024-11-20 05:17:43.164347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.590 [2024-11-20 05:17:43.239417] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:46.590 [2024-11-20 05:17:43.239522] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.590 [2024-11-20 05:17:43.239529] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.590 [2024-11-20 05:17:43.239535] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.590 [2024-11-20 05:17:43.239576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.590 [2024-11-20 05:17:43.239697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.590 [2024-11-20 05:17:43.239785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.590 [2024-11-20 05:17:43.239787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.159 05:17:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.159 05:17:43 -- common/autotest_common.sh@862 -- # return 0 00:19:47.159 05:17:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:47.159 05:17:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.159 05:17:43 -- common/autotest_common.sh@10 -- # set +x 00:19:47.159 05:17:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.159 05:17:43 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:47.159 05:17:43 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:47.159 05:17:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.159 05:17:43 -- common/autotest_common.sh@10 -- # set +x 00:19:47.418 Malloc0 00:19:47.418 05:17:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.418 05:17:43 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:47.418 05:17:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.418 05:17:43 -- common/autotest_common.sh@10 -- # set +x 00:19:47.418 Delay0 00:19:47.418 05:17:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.418 05:17:43 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:47.418 05:17:43 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:47.418 05:17:43 -- common/autotest_common.sh@10 -- # set +x 00:19:47.418 [2024-11-20 05:17:44.015426] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x22d7b60/0x22dbff0) succeed. 00:19:47.418 [2024-11-20 05:17:44.024651] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x22d8dd0/0x22d7720) succeed. 00:19:47.418 [2024-11-20 05:17:44.024673] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:19:47.418 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.418 05:17:44 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:47.418 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.418 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:19:47.418 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.418 05:17:44 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:47.418 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.418 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:19:47.418 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.418 05:17:44 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:47.418 05:17:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.418 05:17:44 -- common/autotest_common.sh@10 -- # set +x 00:19:47.418 [2024-11-20 05:17:44.056950] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:47.418 05:17:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.418 05:17:44 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:47.677 05:17:44 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:47.677 05:17:44 -- common/autotest_common.sh@1187 -- # local i=0 00:19:47.677 05:17:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:47.677 05:17:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:47.677 05:17:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:49.580 05:17:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:49.580 05:17:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:49.580 05:17:46 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:49.580 05:17:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:49.580 05:17:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:49.580 05:17:46 -- common/autotest_common.sh@1197 -- # return 0 00:19:49.580 05:17:46 -- target/initiator_timeout.sh@35 -- # fio_pid=318845 00:19:49.580 05:17:46 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:49.580 05:17:46 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:49.580 [global] 00:19:49.580 thread=1 00:19:49.580 invalidate=1 00:19:49.580 rw=write 00:19:49.580 time_based=1 00:19:49.580 runtime=60 00:19:49.580 ioengine=libaio 00:19:49.580 direct=1 00:19:49.580 bs=4096 00:19:49.580 iodepth=1 00:19:49.580 norandommap=0 00:19:49.580 numjobs=1 00:19:49.580 00:19:49.580 verify_dump=1 00:19:49.580 verify_backlog=512 00:19:49.580 verify_state_save=0 00:19:49.580 do_verify=1 00:19:49.580 verify=crc32c-intel 00:19:49.580 [job0] 00:19:49.580 filename=/dev/nvme0n1 00:19:49.580 Could not set queue depth (nvme0n1) 00:19:49.839 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:19:49.839 fio-3.35 00:19:49.839 Starting 1 thread 00:19:53.130 05:17:49 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:53.130 05:17:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.130 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:19:53.130 true 00:19:53.130 05:17:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.130 05:17:49 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:53.130 05:17:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.130 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:19:53.130 true 00:19:53.130 05:17:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.130 05:17:49 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:53.130 05:17:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.130 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:19:53.130 true 00:19:53.130 05:17:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.130 05:17:49 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:53.130 05:17:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.130 05:17:49 -- common/autotest_common.sh@10 -- # set +x 00:19:53.130 true 00:19:53.130 05:17:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.130 05:17:49 -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:55.664 05:17:52 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:55.664 05:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.664 05:17:52 -- common/autotest_common.sh@10 -- # set +x 00:19:55.664 true 00:19:55.664 05:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.664 05:17:52 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 
avg_write 30 00:19:55.664 05:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.664 05:17:52 -- common/autotest_common.sh@10 -- # set +x 00:19:55.664 true 00:19:55.664 05:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.664 05:17:52 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:55.664 05:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.664 05:17:52 -- common/autotest_common.sh@10 -- # set +x 00:19:55.664 true 00:19:55.664 05:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.664 05:17:52 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:55.664 05:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.664 05:17:52 -- common/autotest_common.sh@10 -- # set +x 00:19:55.664 true 00:19:55.664 05:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.664 05:17:52 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:55.664 05:17:52 -- target/initiator_timeout.sh@54 -- # wait 318845 00:20:51.902 00:20:51.902 job0: (groupid=0, jobs=1): err= 0: pid=319015: Wed Nov 20 05:18:46 2024 00:20:51.902 read: IOPS=1262, BW=5052KiB/s (5173kB/s)(296MiB/60000msec) 00:20:51.902 slat (nsec): min=6478, max=49894, avg=8034.19, stdev=1442.35 00:20:51.902 clat (usec): min=79, max=1071, avg=109.53, stdev= 7.85 00:20:51.902 lat (usec): min=101, max=1079, avg=117.56, stdev= 8.09 00:20:51.902 clat percentiles (usec): 00:20:51.902 | 1.00th=[ 99], 5.00th=[ 101], 10.00th=[ 103], 20.00th=[ 105], 00:20:51.902 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 110], 60.00th=[ 111], 00:20:51.902 | 70.00th=[ 113], 80.00th=[ 115], 90.00th=[ 117], 95.00th=[ 120], 00:20:51.902 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 131], 99.95th=[ 135], 00:20:51.902 | 99.99th=[ 190] 00:20:51.902 write: IOPS=1267, BW=5068KiB/s (5190kB/s)(297MiB/60000msec); 0 zone resets 00:20:51.902 slat (usec): min=8, max=9174, avg=10.53, stdev=43.72 
00:20:51.902 clat (usec): min=82, max=41747k, avg=657.03, stdev=151408.60 00:20:51.902 lat (usec): min=102, max=41747k, avg=667.56, stdev=151408.61 00:20:51.902 clat percentiles (usec): 00:20:51.902 | 1.00th=[ 97], 5.00th=[ 100], 10.00th=[ 101], 20.00th=[ 103], 00:20:51.902 | 30.00th=[ 105], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 110], 00:20:51.902 | 70.00th=[ 111], 80.00th=[ 113], 90.00th=[ 116], 95.00th=[ 118], 00:20:51.902 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 133], 00:20:51.902 | 99.99th=[ 184] 00:20:51.902 bw ( KiB/s): min= 3840, max=17392, per=100.00%, avg=16051.89, stdev=2847.54, samples=37 00:20:51.902 iops : min= 960, max= 4348, avg=4012.97, stdev=711.89, samples=37 00:20:51.902 lat (usec) : 100=4.35%, 250=95.65%, 500=0.01%, 1000=0.01% 00:20:51.902 lat (msec) : 2=0.01%, >=2000=0.01% 00:20:51.902 cpu : usr=1.73%, sys=2.94%, ctx=151811, majf=0, minf=107 00:20:51.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.902 issued rwts: total=75776,76025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:51.902 00:20:51.902 Run status group 0 (all jobs): 00:20:51.902 READ: bw=5052KiB/s (5173kB/s), 5052KiB/s-5052KiB/s (5173kB/s-5173kB/s), io=296MiB (310MB), run=60000-60000msec 00:20:51.902 WRITE: bw=5068KiB/s (5190kB/s), 5068KiB/s-5068KiB/s (5190kB/s-5190kB/s), io=297MiB (311MB), run=60000-60000msec 00:20:51.902 00:20:51.902 Disk stats (read/write): 00:20:51.902 nvme0n1: ios=75589/75685, merge=0/0, ticks=7685/7558, in_queue=15243, util=99.55% 00:20:51.902 05:18:46 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:51.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:51.902 05:18:47 -- target/initiator_timeout.sh@57 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:51.902 05:18:47 -- common/autotest_common.sh@1208 -- # local i=0 00:20:51.902 05:18:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:51.902 05:18:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:51.902 05:18:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:51.902 05:18:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:51.902 05:18:47 -- common/autotest_common.sh@1220 -- # return 0 00:20:51.902 05:18:47 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:51.902 05:18:47 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:51.902 nvmf hotplug test: fio successful as expected 00:20:51.902 05:18:47 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.902 05:18:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.902 05:18:47 -- common/autotest_common.sh@10 -- # set +x 00:20:51.902 05:18:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.902 05:18:47 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:51.902 05:18:47 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:51.902 05:18:47 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:51.902 05:18:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:51.902 05:18:47 -- nvmf/common.sh@116 -- # sync 00:20:51.902 05:18:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:51.902 05:18:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:51.902 05:18:47 -- nvmf/common.sh@119 -- # set +e 00:20:51.902 05:18:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:51.902 05:18:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:51.902 rmmod nvme_rdma 00:20:51.902 rmmod nvme_fabrics 00:20:51.902 05:18:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:51.902 05:18:47 -- 
nvmf/common.sh@123 -- # set -e 00:20:51.902 05:18:47 -- nvmf/common.sh@124 -- # return 0 00:20:51.902 05:18:47 -- nvmf/common.sh@477 -- # '[' -n 318183 ']' 00:20:51.902 05:18:47 -- nvmf/common.sh@478 -- # killprocess 318183 00:20:51.902 05:18:47 -- common/autotest_common.sh@936 -- # '[' -z 318183 ']' 00:20:51.902 05:18:47 -- common/autotest_common.sh@940 -- # kill -0 318183 00:20:51.902 05:18:47 -- common/autotest_common.sh@941 -- # uname 00:20:51.902 05:18:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:51.902 05:18:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 318183 00:20:51.902 05:18:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:51.902 05:18:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:51.902 05:18:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 318183' 00:20:51.902 killing process with pid 318183 00:20:51.902 05:18:47 -- common/autotest_common.sh@955 -- # kill 318183 00:20:51.902 05:18:47 -- common/autotest_common.sh@960 -- # wait 318183 00:20:51.902 05:18:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:51.902 05:18:48 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:51.902 00:20:51.902 real 1m10.397s 00:20:51.902 user 4m26.591s 00:20:51.902 sys 0m6.319s 00:20:51.902 05:18:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:51.902 05:18:48 -- common/autotest_common.sh@10 -- # set +x 00:20:51.902 ************************************ 00:20:51.902 END TEST nvmf_initiator_timeout 00:20:51.902 ************************************ 00:20:51.902 05:18:48 -- nvmf/nvmf.sh@69 -- # [[ phy-fallback == phy ]] 00:20:51.902 05:18:48 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:51.902 05:18:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.902 05:18:48 -- common/autotest_common.sh@10 -- # set +x 00:20:51.902 05:18:48 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:51.902 05:18:48 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:20:51.902 05:18:48 -- common/autotest_common.sh@10 -- # set +x 00:20:51.902 05:18:48 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:51.903 05:18:48 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:20:51.903 05:18:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:51.903 05:18:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:51.903 05:18:48 -- common/autotest_common.sh@10 -- # set +x 00:20:51.903 ************************************ 00:20:51.903 START TEST nvmf_multicontroller 00:20:51.903 ************************************ 00:20:51.903 05:18:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:20:51.903 * Looking for test storage... 00:20:51.903 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:20:51.903 05:18:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:51.903 05:18:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:51.903 05:18:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:51.903 05:18:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:51.903 05:18:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:51.903 05:18:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:51.903 05:18:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:51.903 05:18:48 -- scripts/common.sh@335 -- # IFS=.-: 00:20:51.903 05:18:48 -- scripts/common.sh@335 -- # read -ra ver1 00:20:51.903 05:18:48 -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.903 05:18:48 -- scripts/common.sh@336 -- # read -ra ver2 00:20:51.903 05:18:48 -- scripts/common.sh@337 -- # local 'op=<' 00:20:51.903 05:18:48 -- scripts/common.sh@339 -- # ver1_l=2 00:20:51.903 05:18:48 -- scripts/common.sh@340 -- # ver2_l=1 00:20:51.903 05:18:48 -- scripts/common.sh@342 -- # 
local lt=0 gt=0 eq=0 v 00:20:51.903 05:18:48 -- scripts/common.sh@343 -- # case "$op" in 00:20:51.903 05:18:48 -- scripts/common.sh@344 -- # : 1 00:20:51.903 05:18:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:51.903 05:18:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.903 05:18:48 -- scripts/common.sh@364 -- # decimal 1 00:20:51.903 05:18:48 -- scripts/common.sh@352 -- # local d=1 00:20:51.903 05:18:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.903 05:18:48 -- scripts/common.sh@354 -- # echo 1 00:20:51.903 05:18:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:51.903 05:18:48 -- scripts/common.sh@365 -- # decimal 2 00:20:51.903 05:18:48 -- scripts/common.sh@352 -- # local d=2 00:20:51.903 05:18:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.903 05:18:48 -- scripts/common.sh@354 -- # echo 2 00:20:51.903 05:18:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:51.903 05:18:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:51.903 05:18:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:51.903 05:18:48 -- scripts/common.sh@367 -- # return 0 00:20:51.903 05:18:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.903 05:18:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.903 --rc genhtml_branch_coverage=1 00:20:51.903 --rc genhtml_function_coverage=1 00:20:51.903 --rc genhtml_legend=1 00:20:51.903 --rc geninfo_all_blocks=1 00:20:51.903 --rc geninfo_unexecuted_blocks=1 00:20:51.903 00:20:51.903 ' 00:20:51.903 05:18:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.903 --rc genhtml_branch_coverage=1 00:20:51.903 --rc genhtml_function_coverage=1 00:20:51.903 --rc genhtml_legend=1 00:20:51.903 --rc geninfo_all_blocks=1 00:20:51.903 
--rc geninfo_unexecuted_blocks=1 00:20:51.903 00:20:51.903 ' 00:20:51.903 05:18:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.903 --rc genhtml_branch_coverage=1 00:20:51.903 --rc genhtml_function_coverage=1 00:20:51.903 --rc genhtml_legend=1 00:20:51.903 --rc geninfo_all_blocks=1 00:20:51.903 --rc geninfo_unexecuted_blocks=1 00:20:51.903 00:20:51.903 ' 00:20:51.903 05:18:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:51.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.903 --rc genhtml_branch_coverage=1 00:20:51.903 --rc genhtml_function_coverage=1 00:20:51.903 --rc genhtml_legend=1 00:20:51.903 --rc geninfo_all_blocks=1 00:20:51.903 --rc geninfo_unexecuted_blocks=1 00:20:51.903 00:20:51.903 ' 00:20:51.903 05:18:48 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.903 05:18:48 -- nvmf/common.sh@7 -- # uname -s 00:20:51.903 05:18:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.903 05:18:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.903 05:18:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.903 05:18:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.903 05:18:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.903 05:18:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.903 05:18:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.903 05:18:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.903 05:18:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.903 05:18:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.903 05:18:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:51.903 05:18:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:51.903 05:18:48 -- nvmf/common.sh@19 -- 
# NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.903 05:18:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.903 05:18:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:51.903 05:18:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:20:51.903 05:18:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.903 05:18:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.903 05:18:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.903 05:18:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.903 05:18:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.903 05:18:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.903 05:18:48 -- paths/export.sh@5 -- # export PATH 00:20:51.903 05:18:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.903 05:18:48 -- nvmf/common.sh@46 -- # : 0 00:20:51.903 05:18:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:51.903 05:18:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:51.903 05:18:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:51.903 05:18:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.903 05:18:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.903 05:18:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:51.903 05:18:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:51.903 05:18:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:51.903 05:18:48 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:51.903 05:18:48 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:51.903 05:18:48 -- 
host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:51.903 05:18:48 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:51.903 05:18:48 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.903 05:18:48 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:20:51.903 05:18:48 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:20:51.903 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:20:51.903 05:18:48 -- host/multicontroller.sh@20 -- # exit 0 00:20:51.903 00:20:51.903 real 0m0.182s 00:20:51.903 user 0m0.115s 00:20:51.903 sys 0m0.078s 00:20:51.903 05:18:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:51.903 05:18:48 -- common/autotest_common.sh@10 -- # set +x 00:20:51.903 ************************************ 00:20:51.903 END TEST nvmf_multicontroller 00:20:51.903 ************************************ 00:20:51.903 05:18:48 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:20:51.903 05:18:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:51.903 05:18:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:51.903 05:18:48 -- common/autotest_common.sh@10 -- # set +x 00:20:51.903 ************************************ 00:20:51.903 START TEST nvmf_aer 00:20:51.903 ************************************ 00:20:51.903 05:18:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:20:51.903 * Looking for test storage... 
00:20:51.903 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:20:51.903 05:18:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:51.903 05:18:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:51.903 05:18:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:51.903 05:18:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:51.904 05:18:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:51.904 05:18:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:51.904 05:18:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:51.904 05:18:48 -- scripts/common.sh@335 -- # IFS=.-: 00:20:51.904 05:18:48 -- scripts/common.sh@335 -- # read -ra ver1 00:20:51.904 05:18:48 -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.904 05:18:48 -- scripts/common.sh@336 -- # read -ra ver2 00:20:51.904 05:18:48 -- scripts/common.sh@337 -- # local 'op=<' 00:20:51.904 05:18:48 -- scripts/common.sh@339 -- # ver1_l=2 00:20:51.904 05:18:48 -- scripts/common.sh@340 -- # ver2_l=1 00:20:51.904 05:18:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:51.904 05:18:48 -- scripts/common.sh@343 -- # case "$op" in 00:20:51.904 05:18:48 -- scripts/common.sh@344 -- # : 1 00:20:51.904 05:18:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:51.904 05:18:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.904 05:18:48 -- scripts/common.sh@364 -- # decimal 1 00:20:51.904 05:18:48 -- scripts/common.sh@352 -- # local d=1 00:20:51.904 05:18:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.904 05:18:48 -- scripts/common.sh@354 -- # echo 1 00:20:51.904 05:18:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:51.904 05:18:48 -- scripts/common.sh@365 -- # decimal 2 00:20:51.904 05:18:48 -- scripts/common.sh@352 -- # local d=2 00:20:51.904 05:18:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.904 05:18:48 -- scripts/common.sh@354 -- # echo 2 00:20:51.904 05:18:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:51.904 05:18:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:51.904 05:18:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:51.904 05:18:48 -- scripts/common.sh@367 -- # return 0 00:20:51.904 05:18:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.904 05:18:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:51.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.904 --rc genhtml_branch_coverage=1 00:20:51.904 --rc genhtml_function_coverage=1 00:20:51.904 --rc genhtml_legend=1 00:20:51.904 --rc geninfo_all_blocks=1 00:20:51.904 --rc geninfo_unexecuted_blocks=1 00:20:51.904 00:20:51.904 ' 00:20:51.904 05:18:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:51.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.904 --rc genhtml_branch_coverage=1 00:20:51.904 --rc genhtml_function_coverage=1 00:20:51.904 --rc genhtml_legend=1 00:20:51.904 --rc geninfo_all_blocks=1 00:20:51.904 --rc geninfo_unexecuted_blocks=1 00:20:51.904 00:20:51.904 ' 00:20:51.904 05:18:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:51.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.904 --rc genhtml_branch_coverage=1 00:20:51.904 --rc 
genhtml_function_coverage=1 00:20:51.904 --rc genhtml_legend=1 00:20:51.904 --rc geninfo_all_blocks=1 00:20:51.904 --rc geninfo_unexecuted_blocks=1 00:20:51.904 00:20:51.904 ' 00:20:51.904 05:18:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:51.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.904 --rc genhtml_branch_coverage=1 00:20:51.904 --rc genhtml_function_coverage=1 00:20:51.904 --rc genhtml_legend=1 00:20:51.904 --rc geninfo_all_blocks=1 00:20:51.904 --rc geninfo_unexecuted_blocks=1 00:20:51.904 00:20:51.904 ' 00:20:51.904 05:18:48 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.904 05:18:48 -- nvmf/common.sh@7 -- # uname -s 00:20:51.904 05:18:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.904 05:18:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.904 05:18:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.904 05:18:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.904 05:18:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.904 05:18:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.904 05:18:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.904 05:18:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.904 05:18:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.904 05:18:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.904 05:18:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:51.904 05:18:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:51.904 05:18:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.904 05:18:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.904 05:18:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:51.904 05:18:48 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:20:51.904 05:18:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.904 05:18:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.904 05:18:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.904 05:18:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.904 05:18:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.904 05:18:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.904 05:18:48 -- paths/export.sh@5 -- # export PATH 00:20:51.904 05:18:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.904 05:18:48 -- nvmf/common.sh@46 -- # : 0 00:20:51.904 05:18:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:51.904 05:18:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:51.904 05:18:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:51.904 05:18:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.904 05:18:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.904 05:18:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:51.904 05:18:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:51.904 05:18:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:51.904 05:18:48 -- host/aer.sh@11 -- # nvmftestinit 00:20:51.904 05:18:48 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:51.904 05:18:48 -- nvmf/common.sh@434 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:20:51.904 05:18:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:51.904 05:18:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:51.904 05:18:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:51.904 05:18:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.904 05:18:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.904 05:18:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.904 05:18:48 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:51.904 05:18:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:51.904 05:18:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:51.904 05:18:48 -- common/autotest_common.sh@10 -- # set +x 00:20:57.190 05:18:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:57.190 05:18:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:57.190 05:18:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:57.190 05:18:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:57.190 05:18:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:57.190 05:18:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:57.190 05:18:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:57.190 05:18:53 -- nvmf/common.sh@294 -- # net_devs=() 00:20:57.190 05:18:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:57.190 05:18:53 -- nvmf/common.sh@295 -- # e810=() 00:20:57.190 05:18:53 -- nvmf/common.sh@295 -- # local -ga e810 00:20:57.190 05:18:53 -- nvmf/common.sh@296 -- # x722=() 00:20:57.190 05:18:53 -- nvmf/common.sh@296 -- # local -ga x722 00:20:57.190 05:18:53 -- nvmf/common.sh@297 -- # mlx=() 00:20:57.190 05:18:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:57.190 05:18:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.190 05:18:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:57.190 05:18:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:57.190 05:18:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:57.190 05:18:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:57.190 05:18:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:57.190 05:18:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:57.190 05:18:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:57.190 05:18:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:57.190 05:18:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:57.190 05:18:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:57.190 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:57.190 05:18:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:57.190 05:18:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:57.190 05:18:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme 
connect -i 15' 00:20:57.191 05:18:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:57.191 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:57.191 05:18:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:57.191 05:18:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:57.191 05:18:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:20:57.191 05:18:53 -- nvmf/common.sh@376 -- # modinfo irdma 00:20:57.191 05:18:53 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:20:57.191 05:18:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.191 05:18:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:57.191 05:18:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.191 05:18:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:57.191 Found net devices under 0000:af:00.0: cvl_0_0 00:20:57.191 05:18:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.191 05:18:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.191 05:18:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:57.191 
05:18:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.191 05:18:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:57.191 Found net devices under 0000:af:00.1: cvl_0_1 00:20:57.191 05:18:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.191 05:18:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:57.191 05:18:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:57.191 05:18:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:57.191 05:18:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:57.191 05:18:53 -- nvmf/common.sh@57 -- # uname 00:20:57.191 05:18:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:57.191 05:18:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:57.191 05:18:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:57.191 05:18:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:57.191 05:18:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:57.191 05:18:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:57.191 05:18:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:57.191 05:18:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:57.191 05:18:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:57.191 05:18:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:57.191 05:18:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:57.191 05:18:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:57.191 05:18:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:57.191 05:18:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:57.191 05:18:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:57.191 05:18:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:57.191 
05:18:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:20:57.191 05:18:53 -- nvmf/common.sh@104 -- # continue 2 00:20:57.191 05:18:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:20:57.191 05:18:53 -- nvmf/common.sh@104 -- # continue 2 00:20:57.191 05:18:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:57.191 05:18:53 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:20:57.191 05:18:53 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:57.191 05:18:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:57.191 05:18:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:20:57.191 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:20:57.191 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:20:57.191 altname enp175s0f0np0 00:20:57.191 altname ens801f0np0 00:20:57.191 inet 192.168.100.8/24 scope global cvl_0_0 00:20:57.191 valid_lft forever preferred_lft forever 00:20:57.191 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:20:57.191 valid_lft forever preferred_lft forever 00:20:57.191 05:18:53 
-- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:57.191 05:18:53 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:20:57.191 05:18:53 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:57.191 05:18:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:57.191 05:18:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:20:57.191 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:20:57.191 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:20:57.191 altname enp175s0f1np1 00:20:57.191 altname ens801f1np1 00:20:57.191 inet 192.168.100.9/24 scope global cvl_0_1 00:20:57.191 valid_lft forever preferred_lft forever 00:20:57.191 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:20:57.191 valid_lft forever preferred_lft forever 00:20:57.191 05:18:53 -- nvmf/common.sh@410 -- # return 0 00:20:57.191 05:18:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:57.191 05:18:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:57.191 05:18:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:57.191 05:18:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:57.191 05:18:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:57.191 05:18:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:57.191 05:18:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:57.191 05:18:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:57.191 05:18:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:57.191 05:18:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:57.191 05:18:53 -- 
nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:20:57.191 05:18:53 -- nvmf/common.sh@104 -- # continue 2 00:20:57.191 05:18:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.191 05:18:53 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:20:57.191 05:18:53 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:20:57.191 05:18:53 -- nvmf/common.sh@104 -- # continue 2 00:20:57.191 05:18:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:57.191 05:18:53 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:20:57.191 05:18:53 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:57.191 05:18:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:57.191 05:18:53 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:20:57.191 05:18:53 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:57.191 05:18:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:57.191 05:18:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:57.191 192.168.100.9' 00:20:57.191 05:18:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:57.191 192.168.100.9' 00:20:57.191 05:18:53 -- nvmf/common.sh@445 -- # head -n 1 00:20:57.191 05:18:53 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:57.191 05:18:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:57.191 192.168.100.9' 00:20:57.191 05:18:53 -- nvmf/common.sh@446 -- # tail -n +2 00:20:57.191 05:18:53 -- nvmf/common.sh@446 -- # head -n 1 00:20:57.191 05:18:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:57.191 05:18:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:57.191 05:18:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:57.191 05:18:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:57.191 05:18:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:57.192 05:18:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:57.192 05:18:53 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:57.192 05:18:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:57.192 05:18:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:57.192 05:18:53 -- common/autotest_common.sh@10 -- # set +x 00:20:57.192 05:18:53 -- nvmf/common.sh@469 -- # nvmfpid=332640 00:20:57.192 05:18:53 -- nvmf/common.sh@470 -- # waitforlisten 332640 00:20:57.192 05:18:53 -- common/autotest_common.sh@829 -- # '[' -z 332640 ']' 00:20:57.192 05:18:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.192 05:18:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.192 05:18:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:57.192 05:18:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.192 05:18:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:57.192 05:18:53 -- common/autotest_common.sh@10 -- # set +x 00:20:57.192 [2024-11-20 05:18:53.404099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:57.192 [2024-11-20 05:18:53.404156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.192 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.192 [2024-11-20 05:18:53.459366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:57.192 [2024-11-20 05:18:53.531584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:57.192 [2024-11-20 05:18:53.531690] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.192 [2024-11-20 05:18:53.531697] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.192 [2024-11-20 05:18:53.531703] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:57.192 [2024-11-20 05:18:53.531748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.192 [2024-11-20 05:18:53.531854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.192 [2024-11-20 05:18:53.531942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.192 [2024-11-20 05:18:53.531943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.452 05:18:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.452 05:18:54 -- common/autotest_common.sh@862 -- # return 0 00:20:57.452 05:18:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:57.452 05:18:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:57.452 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.452 05:18:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.452 05:18:54 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:57.452 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.452 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.452 [2024-11-20 05:18:54.274223] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1cd5100/0x1cd4740) succeed. 00:20:57.712 [2024-11-20 05:18:54.283283] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1cd6470/0x1cd4cc0) succeed. 00:20:57.712 [2024-11-20 05:18:54.283306] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:20:57.712 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.712 05:18:54 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:57.712 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.712 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.712 Malloc0 00:20:57.712 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.712 05:18:54 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:57.712 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.712 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.712 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.712 05:18:54 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:57.712 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.712 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.712 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.712 05:18:54 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:57.712 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.712 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.712 [2024-11-20 05:18:54.338236] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:57.712 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.712 05:18:54 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:57.712 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.712 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.712 [2024-11-20 05:18:54.346137] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype 
to be removed in v24.05 00:20:57.712 [ 00:20:57.712 { 00:20:57.712 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:57.712 "subtype": "Discovery", 00:20:57.712 "listen_addresses": [], 00:20:57.712 "allow_any_host": true, 00:20:57.712 "hosts": [] 00:20:57.712 }, 00:20:57.712 { 00:20:57.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.712 "subtype": "NVMe", 00:20:57.712 "listen_addresses": [ 00:20:57.712 { 00:20:57.712 "transport": "RDMA", 00:20:57.712 "trtype": "RDMA", 00:20:57.712 "adrfam": "IPv4", 00:20:57.712 "traddr": "192.168.100.8", 00:20:57.712 "trsvcid": "4420" 00:20:57.712 } 00:20:57.712 ], 00:20:57.712 "allow_any_host": true, 00:20:57.712 "hosts": [], 00:20:57.712 "serial_number": "SPDK00000000000001", 00:20:57.712 "model_number": "SPDK bdev Controller", 00:20:57.712 "max_namespaces": 2, 00:20:57.712 "min_cntlid": 1, 00:20:57.712 "max_cntlid": 65519, 00:20:57.712 "namespaces": [ 00:20:57.712 { 00:20:57.712 "nsid": 1, 00:20:57.712 "bdev_name": "Malloc0", 00:20:57.712 "name": "Malloc0", 00:20:57.712 "nguid": "BBEAFD417D5B4C79BF6F6182656A577D", 00:20:57.712 "uuid": "bbeafd41-7d5b-4c79-bf6f-6182656a577d" 00:20:57.712 } 00:20:57.712 ] 00:20:57.712 } 00:20:57.712 ] 00:20:57.712 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.712 05:18:54 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:57.712 05:18:54 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:57.712 05:18:54 -- host/aer.sh@33 -- # aerpid=332784 00:20:57.712 05:18:54 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:57.712 05:18:54 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:57.712 05:18:54 -- common/autotest_common.sh@1254 -- # local i=0 00:20:57.712 05:18:54 -- common/autotest_common.sh@1255 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:57.712 05:18:54 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:57.712 05:18:54 -- common/autotest_common.sh@1257 -- # i=1 00:20:57.712 05:18:54 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:57.712 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.712 05:18:54 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:57.712 05:18:54 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:57.712 05:18:54 -- common/autotest_common.sh@1257 -- # i=2 00:20:57.712 05:18:54 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:57.972 05:18:54 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:57.972 05:18:54 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:57.972 05:18:54 -- common/autotest_common.sh@1265 -- # return 0 00:20:57.972 05:18:54 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:57.972 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.972 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.972 Malloc1 00:20:57.972 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.972 05:18:54 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:57.972 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.972 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.972 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.972 05:18:54 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:57.972 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.972 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.972 [ 00:20:57.972 { 00:20:57.972 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:57.972 "subtype": "Discovery", 00:20:57.972 "listen_addresses": [], 00:20:57.972 "allow_any_host": true, 00:20:57.972 "hosts": [] 00:20:57.972 }, 
00:20:57.972 { 00:20:57.972 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.972 "subtype": "NVMe", 00:20:57.972 "listen_addresses": [ 00:20:57.972 { 00:20:57.972 "transport": "RDMA", 00:20:57.972 "trtype": "RDMA", 00:20:57.972 "adrfam": "IPv4", 00:20:57.972 "traddr": "192.168.100.8", 00:20:57.973 "trsvcid": "4420" 00:20:57.973 } 00:20:57.973 ], 00:20:57.973 "allow_any_host": true, 00:20:57.973 "hosts": [], 00:20:57.973 "serial_number": "SPDK00000000000001", 00:20:57.973 "model_number": "SPDK bdev Controller", 00:20:57.973 "max_namespaces": 2, 00:20:57.973 "min_cntlid": 1, 00:20:57.973 "max_cntlid": 65519, 00:20:57.973 "namespaces": [ 00:20:57.973 { 00:20:57.973 "nsid": 1, 00:20:57.973 "bdev_name": "Malloc0", 00:20:57.973 "name": "Malloc0", 00:20:57.973 "nguid": "BBEAFD417D5B4C79BF6F6182656A577D", 00:20:57.973 "uuid": "bbeafd41-7d5b-4c79-bf6f-6182656a577d" 00:20:57.973 }, 00:20:57.973 { 00:20:57.973 "nsid": 2, 00:20:57.973 "bdev_name": "Malloc1", 00:20:57.973 "name": "Malloc1", 00:20:57.973 "nguid": "6C2D416BD2D6469BA351272179D91F41", 00:20:57.973 "uuid": "6c2d416b-d2d6-469b-a351-272179d91f41" 00:20:57.973 } 00:20:57.973 ] 00:20:57.973 } 00:20:57.973 ] 00:20:57.973 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.973 05:18:54 -- host/aer.sh@43 -- # wait 332784 00:20:57.973 Asynchronous Event Request test 00:20:57.973 Attaching to 192.168.100.8 00:20:57.973 Attached to 192.168.100.8 00:20:57.973 Registering asynchronous event callbacks... 00:20:57.973 Starting namespace attribute notice tests for all controllers... 00:20:57.973 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:57.973 aer_cb - Changed Namespace 00:20:57.973 Cleaning up... 
00:20:57.973 05:18:54 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:57.973 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.973 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.973 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.973 05:18:54 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:57.973 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.973 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.973 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.973 05:18:54 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.973 05:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.973 05:18:54 -- common/autotest_common.sh@10 -- # set +x 00:20:57.973 05:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.973 05:18:54 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:57.973 05:18:54 -- host/aer.sh@51 -- # nvmftestfini 00:20:57.973 05:18:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:57.973 05:18:54 -- nvmf/common.sh@116 -- # sync 00:20:57.973 05:18:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:57.973 05:18:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:57.973 05:18:54 -- nvmf/common.sh@119 -- # set +e 00:20:57.973 05:18:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:57.973 05:18:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:57.973 rmmod nvme_rdma 00:20:57.973 rmmod nvme_fabrics 00:20:57.973 05:18:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:57.973 05:18:54 -- nvmf/common.sh@123 -- # set -e 00:20:57.973 05:18:54 -- nvmf/common.sh@124 -- # return 0 00:20:57.973 05:18:54 -- nvmf/common.sh@477 -- # '[' -n 332640 ']' 00:20:57.973 05:18:54 -- nvmf/common.sh@478 -- # killprocess 332640 00:20:57.973 05:18:54 -- common/autotest_common.sh@936 -- # '[' -z 332640 ']' 00:20:57.973 05:18:54 -- 
common/autotest_common.sh@940 -- # kill -0 332640 00:20:57.973 05:18:54 -- common/autotest_common.sh@941 -- # uname 00:20:57.973 05:18:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:57.973 05:18:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 332640 00:20:58.232 05:18:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:58.232 05:18:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:58.232 05:18:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 332640' 00:20:58.232 killing process with pid 332640 00:20:58.232 05:18:54 -- common/autotest_common.sh@955 -- # kill 332640 00:20:58.232 [2024-11-20 05:18:54.823302] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:58.232 05:18:54 -- common/autotest_common.sh@960 -- # wait 332640 00:20:58.232 05:18:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:58.232 05:18:55 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:58.232 00:20:58.232 real 0m6.741s 00:20:58.232 user 0m7.462s 00:20:58.232 sys 0m4.009s 00:20:58.232 05:18:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:58.232 05:18:55 -- common/autotest_common.sh@10 -- # set +x 00:20:58.232 ************************************ 00:20:58.232 END TEST nvmf_aer 00:20:58.232 ************************************ 00:20:58.492 05:18:55 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:20:58.492 05:18:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:58.492 05:18:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:58.492 05:18:55 -- common/autotest_common.sh@10 -- # set +x 00:20:58.492 ************************************ 00:20:58.492 START TEST nvmf_async_init 00:20:58.492 ************************************ 
00:20:58.492 05:18:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:20:58.492 * Looking for test storage... 00:20:58.492 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:20:58.492 05:18:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:58.492 05:18:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:58.492 05:18:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:58.492 05:18:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:58.492 05:18:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:58.492 05:18:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:58.492 05:18:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:58.492 05:18:55 -- scripts/common.sh@335 -- # IFS=.-: 00:20:58.492 05:18:55 -- scripts/common.sh@335 -- # read -ra ver1 00:20:58.492 05:18:55 -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.492 05:18:55 -- scripts/common.sh@336 -- # read -ra ver2 00:20:58.492 05:18:55 -- scripts/common.sh@337 -- # local 'op=<' 00:20:58.492 05:18:55 -- scripts/common.sh@339 -- # ver1_l=2 00:20:58.492 05:18:55 -- scripts/common.sh@340 -- # ver2_l=1 00:20:58.492 05:18:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:58.492 05:18:55 -- scripts/common.sh@343 -- # case "$op" in 00:20:58.492 05:18:55 -- scripts/common.sh@344 -- # : 1 00:20:58.492 05:18:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:58.492 05:18:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.492 05:18:55 -- scripts/common.sh@364 -- # decimal 1 00:20:58.492 05:18:55 -- scripts/common.sh@352 -- # local d=1 00:20:58.492 05:18:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.492 05:18:55 -- scripts/common.sh@354 -- # echo 1 00:20:58.492 05:18:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:58.492 05:18:55 -- scripts/common.sh@365 -- # decimal 2 00:20:58.492 05:18:55 -- scripts/common.sh@352 -- # local d=2 00:20:58.492 05:18:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.492 05:18:55 -- scripts/common.sh@354 -- # echo 2 00:20:58.492 05:18:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:58.492 05:18:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:58.492 05:18:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:58.492 05:18:55 -- scripts/common.sh@367 -- # return 0 00:20:58.492 05:18:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.492 05:18:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:58.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.492 --rc genhtml_branch_coverage=1 00:20:58.492 --rc genhtml_function_coverage=1 00:20:58.492 --rc genhtml_legend=1 00:20:58.492 --rc geninfo_all_blocks=1 00:20:58.492 --rc geninfo_unexecuted_blocks=1 00:20:58.492 00:20:58.492 ' 00:20:58.492 05:18:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:58.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.492 --rc genhtml_branch_coverage=1 00:20:58.492 --rc genhtml_function_coverage=1 00:20:58.492 --rc genhtml_legend=1 00:20:58.492 --rc geninfo_all_blocks=1 00:20:58.492 --rc geninfo_unexecuted_blocks=1 00:20:58.492 00:20:58.492 ' 00:20:58.492 05:18:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:58.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.492 --rc genhtml_branch_coverage=1 00:20:58.492 --rc 
genhtml_function_coverage=1 00:20:58.492 --rc genhtml_legend=1 00:20:58.492 --rc geninfo_all_blocks=1 00:20:58.492 --rc geninfo_unexecuted_blocks=1 00:20:58.492 00:20:58.492 ' 00:20:58.492 05:18:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:58.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.492 --rc genhtml_branch_coverage=1 00:20:58.492 --rc genhtml_function_coverage=1 00:20:58.492 --rc genhtml_legend=1 00:20:58.492 --rc geninfo_all_blocks=1 00:20:58.492 --rc geninfo_unexecuted_blocks=1 00:20:58.492 00:20:58.492 ' 00:20:58.492 05:18:55 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.492 05:18:55 -- nvmf/common.sh@7 -- # uname -s 00:20:58.492 05:18:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.492 05:18:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.492 05:18:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.492 05:18:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.492 05:18:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.492 05:18:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.492 05:18:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.492 05:18:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.492 05:18:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.492 05:18:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.492 05:18:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:58.492 05:18:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:58.492 05:18:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.492 05:18:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.492 05:18:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:58.492 05:18:55 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:20:58.492 05:18:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.492 05:18:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.492 05:18:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.492 05:18:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.493 05:18:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.493 05:18:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.493 05:18:55 -- paths/export.sh@5 -- # export PATH 00:20:58.493 05:18:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.493 05:18:55 -- nvmf/common.sh@46 -- # : 0 00:20:58.493 05:18:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:58.493 05:18:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:58.493 05:18:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:58.493 05:18:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.493 05:18:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.493 05:18:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:58.493 05:18:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:58.493 05:18:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:58.493 05:18:55 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:58.493 05:18:55 -- host/async_init.sh@14 -- # null_block_size=512 00:20:58.493 05:18:55 -- host/async_init.sh@15 -- 
# null_bdev=null0 00:20:58.493 05:18:55 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:58.493 05:18:55 -- host/async_init.sh@20 -- # uuidgen 00:20:58.493 05:18:55 -- host/async_init.sh@20 -- # tr -d - 00:20:58.493 05:18:55 -- host/async_init.sh@20 -- # nguid=8692443290b645f1aaceda283cd6da82 00:20:58.493 05:18:55 -- host/async_init.sh@22 -- # nvmftestinit 00:20:58.493 05:18:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:58.493 05:18:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.493 05:18:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:58.493 05:18:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:58.493 05:18:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:58.493 05:18:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.493 05:18:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.493 05:18:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.493 05:18:55 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:58.493 05:18:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:58.493 05:18:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:58.493 05:18:55 -- common/autotest_common.sh@10 -- # set +x 00:21:03.767 05:19:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:03.767 05:19:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:03.767 05:19:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:03.767 05:19:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:03.767 05:19:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:03.767 05:19:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:03.767 05:19:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:03.767 05:19:00 -- nvmf/common.sh@294 -- # net_devs=() 00:21:03.767 05:19:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:03.767 05:19:00 -- nvmf/common.sh@295 -- # e810=() 00:21:03.767 05:19:00 -- nvmf/common.sh@295 -- # local -ga e810 
00:21:03.767 05:19:00 -- nvmf/common.sh@296 -- # x722=() 00:21:03.767 05:19:00 -- nvmf/common.sh@296 -- # local -ga x722 00:21:03.767 05:19:00 -- nvmf/common.sh@297 -- # mlx=() 00:21:03.767 05:19:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:03.767 05:19:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.767 05:19:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:03.767 05:19:00 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:03.767 05:19:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:03.767 05:19:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:03.767 05:19:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:03.767 05:19:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:03.767 05:19:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:03.767 
Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:03.767 05:19:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:03.767 05:19:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:03.767 05:19:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:03.767 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:03.767 05:19:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:03.767 05:19:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:03.767 05:19:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:21:03.767 05:19:00 -- nvmf/common.sh@376 -- # modinfo irdma 00:21:03.767 05:19:00 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:21:03.767 05:19:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:03.767 05:19:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.767 05:19:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:03.767 05:19:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.767 05:19:00 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:03.767 Found net devices under 0000:af:00.0: cvl_0_0 00:21:03.767 05:19:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.767 05:19:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:03.767 05:19:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.767 05:19:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:03.767 05:19:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.767 05:19:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:03.767 Found net devices under 0000:af:00.1: cvl_0_1 00:21:03.767 05:19:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.767 05:19:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:03.767 05:19:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:03.767 05:19:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:03.767 05:19:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:03.767 05:19:00 -- nvmf/common.sh@57 -- # uname 00:21:03.767 05:19:00 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:03.767 05:19:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:03.767 05:19:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:03.767 05:19:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:03.767 05:19:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:03.767 05:19:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:03.767 05:19:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:03.767 05:19:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:03.767 05:19:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:03.767 05:19:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:03.767 05:19:00 -- 
nvmf/common.sh@72 -- # get_rdma_if_list 00:21:03.767 05:19:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:03.767 05:19:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:03.767 05:19:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:03.767 05:19:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:03.767 05:19:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:03.767 05:19:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:03.767 05:19:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.767 05:19:00 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:21:03.767 05:19:00 -- nvmf/common.sh@104 -- # continue 2 00:21:03.767 05:19:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:03.767 05:19:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.767 05:19:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.767 05:19:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:21:03.767 05:19:00 -- nvmf/common.sh@104 -- # continue 2 00:21:03.767 05:19:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:03.767 05:19:00 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:21:03.767 05:19:00 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:21:03.767 05:19:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:21:03.767 05:19:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:03.767 05:19:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:03.767 05:19:00 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:03.767 05:19:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:03.767 05:19:00 -- nvmf/common.sh@80 -- # ip addr show 
cvl_0_0 00:21:03.767 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:03.768 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:21:03.768 altname enp175s0f0np0 00:21:03.768 altname ens801f0np0 00:21:03.768 inet 192.168.100.8/24 scope global cvl_0_0 00:21:03.768 valid_lft forever preferred_lft forever 00:21:03.768 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:21:03.768 valid_lft forever preferred_lft forever 00:21:03.768 05:19:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:03.768 05:19:00 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:21:03.768 05:19:00 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:21:03.768 05:19:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:21:03.768 05:19:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:03.768 05:19:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:03.768 05:19:00 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:03.768 05:19:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:03.768 05:19:00 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:21:03.768 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:03.768 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:21:03.768 altname enp175s0f1np1 00:21:03.768 altname ens801f1np1 00:21:03.768 inet 192.168.100.9/24 scope global cvl_0_1 00:21:03.768 valid_lft forever preferred_lft forever 00:21:03.768 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:21:03.768 valid_lft forever preferred_lft forever 00:21:03.768 05:19:00 -- nvmf/common.sh@410 -- # return 0 00:21:03.768 05:19:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:03.768 05:19:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:03.768 05:19:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:03.768 05:19:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:03.768 05:19:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:03.768 05:19:00 -- nvmf/common.sh@91 -- # local 
net_dev rxe_net_dev rxe_net_devs 00:21:03.768 05:19:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:03.768 05:19:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:03.768 05:19:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:03.768 05:19:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:03.768 05:19:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:03.768 05:19:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.768 05:19:00 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:03.768 05:19:00 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:21:03.768 05:19:00 -- nvmf/common.sh@104 -- # continue 2 00:21:03.768 05:19:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:03.768 05:19:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.768 05:19:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:03.768 05:19:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.768 05:19:00 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:03.768 05:19:00 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:21:03.768 05:19:00 -- nvmf/common.sh@104 -- # continue 2 00:21:03.768 05:19:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:03.768 05:19:00 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:21:03.768 05:19:00 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:21:03.768 05:19:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:21:03.768 05:19:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:03.768 05:19:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:03.768 05:19:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:03.768 05:19:00 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:21:03.768 05:19:00 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:21:03.768 05:19:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 
00:21:03.768 05:19:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:03.768 05:19:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:03.768 05:19:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:03.768 192.168.100.9' 00:21:03.768 05:19:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:03.768 192.168.100.9' 00:21:03.768 05:19:00 -- nvmf/common.sh@445 -- # head -n 1 00:21:03.768 05:19:00 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:03.768 05:19:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:03.768 192.168.100.9' 00:21:03.768 05:19:00 -- nvmf/common.sh@446 -- # tail -n +2 00:21:03.768 05:19:00 -- nvmf/common.sh@446 -- # head -n 1 00:21:03.768 05:19:00 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:03.768 05:19:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:03.768 05:19:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:03.768 05:19:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:03.768 05:19:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:03.768 05:19:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:03.768 05:19:00 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:03.768 05:19:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:03.768 05:19:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:03.768 05:19:00 -- common/autotest_common.sh@10 -- # set +x 00:21:03.768 05:19:00 -- nvmf/common.sh@469 -- # nvmfpid=335945 00:21:03.768 05:19:00 -- nvmf/common.sh@470 -- # waitforlisten 335945 00:21:03.768 05:19:00 -- common/autotest_common.sh@829 -- # '[' -z 335945 ']' 00:21:03.768 05:19:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.768 05:19:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.768 05:19:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:03.768 05:19:00 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.768 05:19:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.768 05:19:00 -- common/autotest_common.sh@10 -- # set +x 00:21:03.768 [2024-11-20 05:19:00.507224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:03.768 [2024-11-20 05:19:00.507271] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.768 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.768 [2024-11-20 05:19:00.563009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.028 [2024-11-20 05:19:00.638145] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:04.028 [2024-11-20 05:19:00.638267] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.028 [2024-11-20 05:19:00.638274] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.028 [2024-11-20 05:19:00.638284] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.028 [2024-11-20 05:19:00.638300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.596 05:19:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.596 05:19:01 -- common/autotest_common.sh@862 -- # return 0 00:21:04.596 05:19:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:04.596 05:19:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:04.596 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.596 05:19:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.596 05:19:01 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:04.596 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.596 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.596 [2024-11-20 05:19:01.357885] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x2357fa0/0x23575e0) succeed. 00:21:04.596 [2024-11-20 05:19:01.366744] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x2359250/0x2357b60) succeed. 00:21:04.596 [2024-11-20 05:19:01.366767] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:21:04.596 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.596 05:19:01 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:04.596 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.596 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.596 null0 00:21:04.596 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.596 05:19:01 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:04.596 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.596 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.596 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.596 05:19:01 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:04.596 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.596 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.596 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.596 05:19:01 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8692443290b645f1aaceda283cd6da82 00:21:04.596 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.596 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.596 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.596 05:19:01 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:04.596 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.596 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.596 [2024-11-20 05:19:01.404841] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:04.596 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.596 05:19:01 -- host/async_init.sh@37 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:04.596 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.596 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.856 nvme0n1 00:21:04.856 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.856 05:19:01 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:04.856 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.856 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.856 [ 00:21:04.856 { 00:21:04.856 "name": "nvme0n1", 00:21:04.856 "aliases": [ 00:21:04.856 "86924432-90b6-45f1-aace-da283cd6da82" 00:21:04.856 ], 00:21:04.856 "product_name": "NVMe disk", 00:21:04.856 "block_size": 512, 00:21:04.856 "num_blocks": 2097152, 00:21:04.856 "uuid": "86924432-90b6-45f1-aace-da283cd6da82", 00:21:04.856 "assigned_rate_limits": { 00:21:04.856 "rw_ios_per_sec": 0, 00:21:04.856 "rw_mbytes_per_sec": 0, 00:21:04.856 "r_mbytes_per_sec": 0, 00:21:04.856 "w_mbytes_per_sec": 0 00:21:04.856 }, 00:21:04.856 "claimed": false, 00:21:04.856 "zoned": false, 00:21:04.856 "supported_io_types": { 00:21:04.856 "read": true, 00:21:04.856 "write": true, 00:21:04.856 "unmap": false, 00:21:04.856 "write_zeroes": true, 00:21:04.856 "flush": true, 00:21:04.856 "reset": true, 00:21:04.856 "compare": true, 00:21:04.856 "compare_and_write": true, 00:21:04.856 "abort": true, 00:21:04.856 "nvme_admin": true, 00:21:04.856 "nvme_io": true 00:21:04.856 }, 00:21:04.856 "memory_domains": [ 00:21:04.856 { 00:21:04.856 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:04.856 "dma_device_type": 0 00:21:04.856 } 00:21:04.856 ], 00:21:04.856 "driver_specific": { 00:21:04.856 "nvme": [ 00:21:04.856 { 00:21:04.857 "trid": { 00:21:04.857 "trtype": "RDMA", 00:21:04.857 "adrfam": "IPv4", 00:21:04.857 "traddr": "192.168.100.8", 00:21:04.857 "trsvcid": "4420", 00:21:04.857 "subnqn": "nqn.2016-06.io.spdk:cnode0" 
00:21:04.857 }, 00:21:04.857 "ctrlr_data": { 00:21:04.857 "cntlid": 1, 00:21:04.857 "vendor_id": "0x8086", 00:21:04.857 "model_number": "SPDK bdev Controller", 00:21:04.857 "serial_number": "00000000000000000000", 00:21:04.857 "firmware_revision": "24.01.1", 00:21:04.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:04.857 "oacs": { 00:21:04.857 "security": 0, 00:21:04.857 "format": 0, 00:21:04.857 "firmware": 0, 00:21:04.857 "ns_manage": 0 00:21:04.857 }, 00:21:04.857 "multi_ctrlr": true, 00:21:04.857 "ana_reporting": false 00:21:04.857 }, 00:21:04.857 "vs": { 00:21:04.857 "nvme_version": "1.3" 00:21:04.857 }, 00:21:04.857 "ns_data": { 00:21:04.857 "id": 1, 00:21:04.857 "can_share": true 00:21:04.857 } 00:21:04.857 } 00:21:04.857 ], 00:21:04.857 "mp_policy": "active_passive" 00:21:04.857 } 00:21:04.857 } 00:21:04.857 ] 00:21:04.857 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.857 05:19:01 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:04.857 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.857 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.857 [2024-11-20 05:19:01.502212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:04.857 [2024-11-20 05:19:01.527782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:04.857 [2024-11-20 05:19:01.550864] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:04.857 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.857 05:19:01 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:04.857 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.857 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.857 [ 00:21:04.857 { 00:21:04.857 "name": "nvme0n1", 00:21:04.857 "aliases": [ 00:21:04.857 "86924432-90b6-45f1-aace-da283cd6da82" 00:21:04.857 ], 00:21:04.857 "product_name": "NVMe disk", 00:21:04.857 "block_size": 512, 00:21:04.857 "num_blocks": 2097152, 00:21:04.857 "uuid": "86924432-90b6-45f1-aace-da283cd6da82", 00:21:04.857 "assigned_rate_limits": { 00:21:04.857 "rw_ios_per_sec": 0, 00:21:04.857 "rw_mbytes_per_sec": 0, 00:21:04.857 "r_mbytes_per_sec": 0, 00:21:04.857 "w_mbytes_per_sec": 0 00:21:04.857 }, 00:21:04.857 "claimed": false, 00:21:04.857 "zoned": false, 00:21:04.857 "supported_io_types": { 00:21:04.857 "read": true, 00:21:04.857 "write": true, 00:21:04.857 "unmap": false, 00:21:04.857 "write_zeroes": true, 00:21:04.857 "flush": true, 00:21:04.857 "reset": true, 00:21:04.857 "compare": true, 00:21:04.857 "compare_and_write": true, 00:21:04.857 "abort": true, 00:21:04.857 "nvme_admin": true, 00:21:04.857 "nvme_io": true 00:21:04.857 }, 00:21:04.857 "memory_domains": [ 00:21:04.857 { 00:21:04.857 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:04.857 "dma_device_type": 0 00:21:04.857 } 00:21:04.857 ], 00:21:04.857 "driver_specific": { 00:21:04.857 "nvme": [ 00:21:04.857 { 00:21:04.857 "trid": { 00:21:04.857 "trtype": "RDMA", 00:21:04.857 "adrfam": "IPv4", 00:21:04.857 "traddr": "192.168.100.8", 00:21:04.857 "trsvcid": "4420", 00:21:04.857 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:04.857 }, 00:21:04.857 "ctrlr_data": { 00:21:04.857 "cntlid": 2, 00:21:04.857 "vendor_id": "0x8086", 00:21:04.857 "model_number": "SPDK bdev Controller", 00:21:04.857 "serial_number": "00000000000000000000", 00:21:04.857 "firmware_revision": "24.01.1", 00:21:04.857 
"subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:04.857 "oacs": { 00:21:04.857 "security": 0, 00:21:04.857 "format": 0, 00:21:04.857 "firmware": 0, 00:21:04.857 "ns_manage": 0 00:21:04.857 }, 00:21:04.857 "multi_ctrlr": true, 00:21:04.857 "ana_reporting": false 00:21:04.857 }, 00:21:04.857 "vs": { 00:21:04.857 "nvme_version": "1.3" 00:21:04.857 }, 00:21:04.857 "ns_data": { 00:21:04.857 "id": 1, 00:21:04.857 "can_share": true 00:21:04.857 } 00:21:04.857 } 00:21:04.857 ], 00:21:04.857 "mp_policy": "active_passive" 00:21:04.857 } 00:21:04.857 } 00:21:04.857 ] 00:21:04.857 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.857 05:19:01 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.857 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.857 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.857 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.857 05:19:01 -- host/async_init.sh@53 -- # mktemp 00:21:04.857 05:19:01 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5flzvuWzFt 00:21:04.857 05:19:01 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:04.857 05:19:01 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5flzvuWzFt 00:21:04.857 05:19:01 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:04.857 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.857 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.857 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.857 05:19:01 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:21:04.857 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.857 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.857 [2024-11-20 05:19:01.610231] 
rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:04.857 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.857 05:19:01 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5flzvuWzFt 00:21:04.857 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.857 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.857 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.857 05:19:01 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5flzvuWzFt 00:21:04.857 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.857 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:04.857 [2024-11-20 05:19:01.626257] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.117 nvme0n1 00:21:05.117 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.117 05:19:01 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:05.117 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.117 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:05.117 [ 00:21:05.117 { 00:21:05.117 "name": "nvme0n1", 00:21:05.117 "aliases": [ 00:21:05.117 "86924432-90b6-45f1-aace-da283cd6da82" 00:21:05.117 ], 00:21:05.117 "product_name": "NVMe disk", 00:21:05.117 "block_size": 512, 00:21:05.117 "num_blocks": 2097152, 00:21:05.117 "uuid": "86924432-90b6-45f1-aace-da283cd6da82", 00:21:05.117 "assigned_rate_limits": { 00:21:05.117 "rw_ios_per_sec": 0, 00:21:05.117 "rw_mbytes_per_sec": 0, 00:21:05.117 "r_mbytes_per_sec": 0, 00:21:05.117 "w_mbytes_per_sec": 0 00:21:05.117 }, 00:21:05.117 "claimed": false, 00:21:05.117 "zoned": false, 00:21:05.117 
"supported_io_types": { 00:21:05.117 "read": true, 00:21:05.117 "write": true, 00:21:05.117 "unmap": false, 00:21:05.117 "write_zeroes": true, 00:21:05.117 "flush": true, 00:21:05.117 "reset": true, 00:21:05.117 "compare": true, 00:21:05.117 "compare_and_write": true, 00:21:05.117 "abort": true, 00:21:05.117 "nvme_admin": true, 00:21:05.117 "nvme_io": true 00:21:05.117 }, 00:21:05.117 "memory_domains": [ 00:21:05.117 { 00:21:05.117 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:05.117 "dma_device_type": 0 00:21:05.117 } 00:21:05.117 ], 00:21:05.117 "driver_specific": { 00:21:05.117 "nvme": [ 00:21:05.117 { 00:21:05.117 "trid": { 00:21:05.117 "trtype": "RDMA", 00:21:05.117 "adrfam": "IPv4", 00:21:05.117 "traddr": "192.168.100.8", 00:21:05.117 "trsvcid": "4421", 00:21:05.117 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:05.117 }, 00:21:05.117 "ctrlr_data": { 00:21:05.117 "cntlid": 3, 00:21:05.117 "vendor_id": "0x8086", 00:21:05.117 "model_number": "SPDK bdev Controller", 00:21:05.117 "serial_number": "00000000000000000000", 00:21:05.117 "firmware_revision": "24.01.1", 00:21:05.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.117 "oacs": { 00:21:05.117 "security": 0, 00:21:05.117 "format": 0, 00:21:05.117 "firmware": 0, 00:21:05.117 "ns_manage": 0 00:21:05.117 }, 00:21:05.117 "multi_ctrlr": true, 00:21:05.117 "ana_reporting": false 00:21:05.117 }, 00:21:05.117 "vs": { 00:21:05.117 "nvme_version": "1.3" 00:21:05.117 }, 00:21:05.117 "ns_data": { 00:21:05.117 "id": 1, 00:21:05.117 "can_share": true 00:21:05.117 } 00:21:05.117 } 00:21:05.117 ], 00:21:05.117 "mp_policy": "active_passive" 00:21:05.117 } 00:21:05.117 } 00:21:05.117 ] 00:21:05.117 05:19:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.117 05:19:01 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.117 05:19:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.117 05:19:01 -- common/autotest_common.sh@10 -- # set +x 00:21:05.117 05:19:01 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.117 05:19:01 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.5flzvuWzFt 00:21:05.117 05:19:01 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:05.117 05:19:01 -- host/async_init.sh@78 -- # nvmftestfini 00:21:05.117 05:19:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:05.117 05:19:01 -- nvmf/common.sh@116 -- # sync 00:21:05.117 05:19:01 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:05.117 05:19:01 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:05.117 05:19:01 -- nvmf/common.sh@119 -- # set +e 00:21:05.117 05:19:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:05.117 05:19:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:05.117 rmmod nvme_rdma 00:21:05.117 rmmod nvme_fabrics 00:21:05.117 05:19:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:05.117 05:19:01 -- nvmf/common.sh@123 -- # set -e 00:21:05.117 05:19:01 -- nvmf/common.sh@124 -- # return 0 00:21:05.117 05:19:01 -- nvmf/common.sh@477 -- # '[' -n 335945 ']' 00:21:05.117 05:19:01 -- nvmf/common.sh@478 -- # killprocess 335945 00:21:05.117 05:19:01 -- common/autotest_common.sh@936 -- # '[' -z 335945 ']' 00:21:05.117 05:19:01 -- common/autotest_common.sh@940 -- # kill -0 335945 00:21:05.117 05:19:01 -- common/autotest_common.sh@941 -- # uname 00:21:05.117 05:19:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:05.117 05:19:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 335945 00:21:05.117 05:19:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:05.117 05:19:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:05.117 05:19:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 335945' 00:21:05.117 killing process with pid 335945 00:21:05.117 05:19:01 -- common/autotest_common.sh@955 -- # kill 335945 00:21:05.117 05:19:01 -- common/autotest_common.sh@960 -- # wait 335945 00:21:05.376 05:19:02 -- nvmf/common.sh@480 
-- # '[' '' == iso ']' 00:21:05.376 05:19:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:05.376 00:21:05.376 real 0m6.930s 00:21:05.376 user 0m3.376s 00:21:05.376 sys 0m4.149s 00:21:05.376 05:19:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:05.376 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:21:05.376 ************************************ 00:21:05.376 END TEST nvmf_async_init 00:21:05.376 ************************************ 00:21:05.376 05:19:02 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:05.376 05:19:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:05.376 05:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.376 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:21:05.376 ************************************ 00:21:05.376 START TEST dma 00:21:05.376 ************************************ 00:21:05.376 05:19:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:05.376 * Looking for test storage... 
00:21:05.376 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:21:05.376 05:19:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:05.376 05:19:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:05.376 05:19:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:05.636 05:19:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:05.636 05:19:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:05.636 05:19:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:05.636 05:19:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:05.636 05:19:02 -- scripts/common.sh@335 -- # IFS=.-: 00:21:05.636 05:19:02 -- scripts/common.sh@335 -- # read -ra ver1 00:21:05.636 05:19:02 -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.636 05:19:02 -- scripts/common.sh@336 -- # read -ra ver2 00:21:05.636 05:19:02 -- scripts/common.sh@337 -- # local 'op=<' 00:21:05.636 05:19:02 -- scripts/common.sh@339 -- # ver1_l=2 00:21:05.636 05:19:02 -- scripts/common.sh@340 -- # ver2_l=1 00:21:05.636 05:19:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:05.636 05:19:02 -- scripts/common.sh@343 -- # case "$op" in 00:21:05.636 05:19:02 -- scripts/common.sh@344 -- # : 1 00:21:05.636 05:19:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:05.636 05:19:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.636 05:19:02 -- scripts/common.sh@364 -- # decimal 1 00:21:05.636 05:19:02 -- scripts/common.sh@352 -- # local d=1 00:21:05.636 05:19:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.636 05:19:02 -- scripts/common.sh@354 -- # echo 1 00:21:05.636 05:19:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:05.636 05:19:02 -- scripts/common.sh@365 -- # decimal 2 00:21:05.636 05:19:02 -- scripts/common.sh@352 -- # local d=2 00:21:05.636 05:19:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.636 05:19:02 -- scripts/common.sh@354 -- # echo 2 00:21:05.636 05:19:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:05.636 05:19:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:05.636 05:19:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:05.636 05:19:02 -- scripts/common.sh@367 -- # return 0 00:21:05.636 05:19:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.636 05:19:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:05.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.636 --rc genhtml_branch_coverage=1 00:21:05.636 --rc genhtml_function_coverage=1 00:21:05.636 --rc genhtml_legend=1 00:21:05.636 --rc geninfo_all_blocks=1 00:21:05.636 --rc geninfo_unexecuted_blocks=1 00:21:05.636 00:21:05.636 ' 00:21:05.636 05:19:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:05.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.636 --rc genhtml_branch_coverage=1 00:21:05.636 --rc genhtml_function_coverage=1 00:21:05.636 --rc genhtml_legend=1 00:21:05.636 --rc geninfo_all_blocks=1 00:21:05.636 --rc geninfo_unexecuted_blocks=1 00:21:05.636 00:21:05.636 ' 00:21:05.636 05:19:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:05.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.636 --rc genhtml_branch_coverage=1 00:21:05.636 --rc 
genhtml_function_coverage=1 00:21:05.636 --rc genhtml_legend=1 00:21:05.636 --rc geninfo_all_blocks=1 00:21:05.636 --rc geninfo_unexecuted_blocks=1 00:21:05.636 00:21:05.636 ' 00:21:05.636 05:19:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:05.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.636 --rc genhtml_branch_coverage=1 00:21:05.636 --rc genhtml_function_coverage=1 00:21:05.636 --rc genhtml_legend=1 00:21:05.636 --rc geninfo_all_blocks=1 00:21:05.636 --rc geninfo_unexecuted_blocks=1 00:21:05.636 00:21:05.636 ' 00:21:05.636 05:19:02 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.636 05:19:02 -- nvmf/common.sh@7 -- # uname -s 00:21:05.636 05:19:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.636 05:19:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.636 05:19:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.636 05:19:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.636 05:19:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.636 05:19:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.636 05:19:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.636 05:19:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.636 05:19:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.636 05:19:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.636 05:19:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:05.636 05:19:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:05.636 05:19:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.636 05:19:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.636 05:19:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:05.636 05:19:02 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:21:05.636 05:19:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.636 05:19:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.636 05:19:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.636 05:19:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.636 05:19:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.637 05:19:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.637 05:19:02 -- paths/export.sh@5 -- # export PATH 00:21:05.637 05:19:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.637 05:19:02 -- nvmf/common.sh@46 -- # : 0 00:21:05.637 05:19:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:05.637 05:19:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:05.637 05:19:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:05.637 05:19:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.637 05:19:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.637 05:19:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:05.637 05:19:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:05.637 05:19:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:05.637 05:19:02 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:21:05.637 05:19:02 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:21:05.637 05:19:02 -- host/dma.sh@17 -- # 
MALLOC_BLOCK_SIZE=512 00:21:05.637 05:19:02 -- host/dma.sh@18 -- # subsystem=0 00:21:05.637 05:19:02 -- host/dma.sh@93 -- # nvmftestinit 00:21:05.637 05:19:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:05.637 05:19:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.637 05:19:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:05.637 05:19:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:05.637 05:19:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:05.637 05:19:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.637 05:19:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.637 05:19:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.637 05:19:02 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:05.637 05:19:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:05.637 05:19:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:05.637 05:19:02 -- common/autotest_common.sh@10 -- # set +x 00:21:10.913 05:19:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:10.913 05:19:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:10.913 05:19:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:10.913 05:19:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:10.913 05:19:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:10.913 05:19:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:10.913 05:19:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:10.913 05:19:07 -- nvmf/common.sh@294 -- # net_devs=() 00:21:10.913 05:19:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:10.913 05:19:07 -- nvmf/common.sh@295 -- # e810=() 00:21:10.913 05:19:07 -- nvmf/common.sh@295 -- # local -ga e810 00:21:10.913 05:19:07 -- nvmf/common.sh@296 -- # x722=() 00:21:10.913 05:19:07 -- nvmf/common.sh@296 -- # local -ga x722 00:21:10.913 05:19:07 -- nvmf/common.sh@297 -- # mlx=() 00:21:10.913 05:19:07 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:21:10.913 05:19:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.913 05:19:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:10.913 05:19:07 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:10.913 05:19:07 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:10.913 05:19:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:10.913 05:19:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:10.913 05:19:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:10.913 05:19:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:10.913 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:10.913 05:19:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@349 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:10.913 05:19:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:10.913 05:19:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:10.913 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:10.913 05:19:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:10.913 05:19:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:10.913 05:19:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:21:10.913 05:19:07 -- nvmf/common.sh@376 -- # modinfo irdma 00:21:10.913 05:19:07 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:21:10.913 05:19:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:10.913 05:19:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.913 05:19:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:10.913 05:19:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.913 05:19:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:10.913 Found net devices under 0000:af:00.0: cvl_0_0 00:21:10.913 05:19:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.913 
05:19:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:10.913 05:19:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.913 05:19:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:10.913 05:19:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.913 05:19:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:10.913 Found net devices under 0000:af:00.1: cvl_0_1 00:21:10.913 05:19:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.913 05:19:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:10.913 05:19:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:10.913 05:19:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:10.913 05:19:07 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:10.913 05:19:07 -- nvmf/common.sh@57 -- # uname 00:21:10.913 05:19:07 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:10.913 05:19:07 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:10.913 05:19:07 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:10.913 05:19:07 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:10.913 05:19:07 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:10.913 05:19:07 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:10.913 05:19:07 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:10.913 05:19:07 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:10.913 05:19:07 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:10.913 05:19:07 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:10.913 05:19:07 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:10.913 05:19:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:10.913 05:19:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:10.913 05:19:07 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:10.913 05:19:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:10.913 05:19:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:10.913 05:19:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:10.913 05:19:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.913 05:19:07 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:21:10.913 05:19:07 -- nvmf/common.sh@104 -- # continue 2 00:21:10.913 05:19:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:10.913 05:19:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.913 05:19:07 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.913 05:19:07 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:21:10.913 05:19:07 -- nvmf/common.sh@104 -- # continue 2 00:21:10.913 05:19:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:10.913 05:19:07 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:21:10.913 05:19:07 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:21:10.913 05:19:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:21:10.913 05:19:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:10.913 05:19:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:10.913 05:19:07 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:10.913 05:19:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:21:10.913 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:10.913 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:21:10.913 altname enp175s0f0np0 00:21:10.913 altname ens801f0np0 00:21:10.913 
inet 192.168.100.8/24 scope global cvl_0_0 00:21:10.913 valid_lft forever preferred_lft forever 00:21:10.913 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:21:10.913 valid_lft forever preferred_lft forever 00:21:10.913 05:19:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:10.913 05:19:07 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:21:10.913 05:19:07 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:21:10.913 05:19:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:21:10.913 05:19:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:10.913 05:19:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:10.913 05:19:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:10.913 05:19:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:10.913 05:19:07 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:21:10.913 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:10.913 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:21:10.913 altname enp175s0f1np1 00:21:10.913 altname ens801f1np1 00:21:10.913 inet 192.168.100.9/24 scope global cvl_0_1 00:21:10.913 valid_lft forever preferred_lft forever 00:21:10.913 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:21:10.914 valid_lft forever preferred_lft forever 00:21:10.914 05:19:07 -- nvmf/common.sh@410 -- # return 0 00:21:10.914 05:19:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:10.914 05:19:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:10.914 05:19:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:10.914 05:19:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:10.914 05:19:07 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:10.914 05:19:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:10.914 05:19:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:10.914 05:19:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:10.914 05:19:07 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:10.914 05:19:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:10.914 05:19:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:10.914 05:19:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.914 05:19:07 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:10.914 05:19:07 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:21:10.914 05:19:07 -- nvmf/common.sh@104 -- # continue 2 00:21:10.914 05:19:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:10.914 05:19:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.914 05:19:07 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:10.914 05:19:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.914 05:19:07 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:10.914 05:19:07 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:21:10.914 05:19:07 -- nvmf/common.sh@104 -- # continue 2 00:21:10.914 05:19:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:10.914 05:19:07 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:21:10.914 05:19:07 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:21:10.914 05:19:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:21:10.914 05:19:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:10.914 05:19:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:10.914 05:19:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:10.914 05:19:07 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:21:10.914 05:19:07 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:21:10.914 05:19:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:21:10.914 05:19:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:10.914 05:19:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:10.914 05:19:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:10.914 
192.168.100.9' 00:21:10.914 05:19:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:10.914 192.168.100.9' 00:21:10.914 05:19:07 -- nvmf/common.sh@445 -- # head -n 1 00:21:10.914 05:19:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:10.914 05:19:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:10.914 192.168.100.9' 00:21:10.914 05:19:07 -- nvmf/common.sh@446 -- # tail -n +2 00:21:10.914 05:19:07 -- nvmf/common.sh@446 -- # head -n 1 00:21:10.914 05:19:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:10.914 05:19:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:10.914 05:19:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:10.914 05:19:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:10.914 05:19:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:10.914 05:19:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:10.914 05:19:07 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:21:10.914 05:19:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:10.914 05:19:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.914 05:19:07 -- common/autotest_common.sh@10 -- # set +x 00:21:10.914 05:19:07 -- nvmf/common.sh@469 -- # nvmfpid=339245 00:21:10.914 05:19:07 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:10.914 05:19:07 -- nvmf/common.sh@470 -- # waitforlisten 339245 00:21:10.914 05:19:07 -- common/autotest_common.sh@829 -- # '[' -z 339245 ']' 00:21:10.914 05:19:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.914 05:19:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.914 05:19:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:10.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.914 05:19:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.914 05:19:07 -- common/autotest_common.sh@10 -- # set +x 00:21:10.914 [2024-11-20 05:19:07.605023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:10.914 [2024-11-20 05:19:07.605067] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.914 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.914 [2024-11-20 05:19:07.661329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:10.914 [2024-11-20 05:19:07.735313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:10.914 [2024-11-20 05:19:07.735438] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.914 [2024-11-20 05:19:07.735447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.914 [2024-11-20 05:19:07.735453] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:10.914 [2024-11-20 05:19:07.735496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.914 [2024-11-20 05:19:07.735500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.850 05:19:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.850 05:19:08 -- common/autotest_common.sh@862 -- # return 0 00:21:11.850 05:19:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:11.850 05:19:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.850 05:19:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.850 05:19:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.850 05:19:08 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:11.850 05:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.850 05:19:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.850 [2024-11-20 05:19:08.468845] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x24dcab0/0x24dc0f0) succeed. 00:21:11.850 [2024-11-20 05:19:08.477514] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x24ddd60/0x24dc670) succeed. 00:21:11.850 [2024-11-20 05:19:08.477539] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:21:11.850 05:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.850 05:19:08 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:21:11.850 05:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.850 05:19:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.850 Malloc0 00:21:11.850 05:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.850 05:19:08 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:11.850 05:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.850 05:19:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.850 05:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.850 05:19:08 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:11.850 05:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.850 05:19:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.850 05:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.850 05:19:08 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:11.850 05:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.850 05:19:08 -- common/autotest_common.sh@10 -- # set +x 00:21:11.850 [2024-11-20 05:19:08.567631] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:11.850 05:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.850 05:19:08 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:21:11.850 05:19:08 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:21:11.850 05:19:08 -- nvmf/common.sh@520 -- # config=() 00:21:11.850 05:19:08 -- nvmf/common.sh@520 -- # 
local subsystem config 00:21:11.850 05:19:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:11.850 05:19:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:11.850 { 00:21:11.850 "params": { 00:21:11.850 "name": "Nvme$subsystem", 00:21:11.850 "trtype": "$TEST_TRANSPORT", 00:21:11.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.850 "adrfam": "ipv4", 00:21:11.850 "trsvcid": "$NVMF_PORT", 00:21:11.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.850 "hdgst": ${hdgst:-false}, 00:21:11.850 "ddgst": ${ddgst:-false} 00:21:11.850 }, 00:21:11.850 "method": "bdev_nvme_attach_controller" 00:21:11.850 } 00:21:11.850 EOF 00:21:11.850 )") 00:21:11.850 05:19:08 -- nvmf/common.sh@542 -- # cat 00:21:11.850 05:19:08 -- nvmf/common.sh@544 -- # jq . 00:21:11.850 05:19:08 -- nvmf/common.sh@545 -- # IFS=, 00:21:11.850 05:19:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:11.850 "params": { 00:21:11.850 "name": "Nvme0", 00:21:11.850 "trtype": "rdma", 00:21:11.850 "traddr": "192.168.100.8", 00:21:11.850 "adrfam": "ipv4", 00:21:11.850 "trsvcid": "4420", 00:21:11.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:11.850 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:11.850 "hdgst": false, 00:21:11.850 "ddgst": false 00:21:11.850 }, 00:21:11.850 "method": "bdev_nvme_attach_controller" 00:21:11.850 }' 00:21:11.850 [2024-11-20 05:19:08.611826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:11.850 [2024-11-20 05:19:08.611879] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339436 ] 00:21:11.850 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.850 [2024-11-20 05:19:08.664584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:12.110 [2024-11-20 05:19:08.734757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.110 [2024-11-20 05:19:08.734760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.383 bdev Nvme0n1 reports 1 memory domains 00:21:17.383 bdev Nvme0n1 supports RDMA memory domain 00:21:17.383 Initialization complete, running randrw IO for 5 sec on 2 cores 00:21:17.383 ========================================================================== 00:21:17.383 Latency [us] 00:21:17.383 IOPS MiB/s Average min max 00:21:17.383 Core 2: 20911.11 81.68 764.45 260.63 10461.69 00:21:17.383 Core 3: 21265.44 83.07 751.66 235.75 10348.56 00:21:17.383 ========================================================================== 00:21:17.383 Total : 42176.56 164.75 758.00 235.75 10461.69 00:21:17.383 00:21:17.383 Total operations: 210922, translate 210922 pull_push 0 memzero 0 00:21:17.383 05:19:14 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:21:17.383 05:19:14 -- host/dma.sh@107 -- # gen_malloc_json 00:21:17.383 05:19:14 -- host/dma.sh@21 -- # jq . 00:21:17.383 [2024-11-20 05:19:14.173697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:17.383 [2024-11-20 05:19:14.173751] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340320 ] 00:21:17.383 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.642 [2024-11-20 05:19:14.226075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:17.642 [2024-11-20 05:19:14.292304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.642 [2024-11-20 05:19:14.292307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.920 bdev Malloc0 reports 1 memory domains 00:21:22.920 bdev Malloc0 doesn't support RDMA memory domain 00:21:22.920 Initialization complete, running randrw IO for 5 sec on 2 cores 00:21:22.920 ========================================================================== 00:21:22.920 Latency [us] 00:21:22.920 IOPS MiB/s Average min max 00:21:22.920 Core 2: 14803.06 57.82 1080.13 350.95 1400.40 00:21:22.920 Core 3: 14781.87 57.74 1081.65 407.33 1840.52 00:21:22.920 ========================================================================== 00:21:22.920 Total : 29584.92 115.57 1080.89 350.95 1840.52 00:21:22.920 00:21:22.920 Total operations: 147974, translate 0 pull_push 591896 memzero 0 00:21:22.920 05:19:19 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:21:22.920 05:19:19 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:21:22.920 05:19:19 -- host/dma.sh@48 -- # local subsystem=0 00:21:22.920 05:19:19 -- host/dma.sh@50 -- # jq . 00:21:22.920 Ignoring -M option 00:21:22.920 [2024-11-20 05:19:19.663827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:22.920 [2024-11-20 05:19:19.663876] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341146 ] 00:21:22.921 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.921 [2024-11-20 05:19:19.715389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:23.180 [2024-11-20 05:19:19.781812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.180 [2024-11-20 05:19:19.781814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.180 [2024-11-20 05:19:19.988652] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:21:28.456 [2024-11-20 05:19:25.017140] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:21:28.456 bdev af90b649-2d85-4ad7-aded-3822eb344b45 reports 1 memory domains 00:21:28.456 bdev af90b649-2d85-4ad7-aded-3822eb344b45 supports RDMA memory domain 00:21:28.456 Initialization complete, running randread IO for 5 sec on 2 cores 00:21:28.456 ========================================================================== 00:21:28.456 Latency [us] 00:21:28.456 IOPS MiB/s Average min max 00:21:28.456 Core 2: 73877.69 288.58 215.74 67.41 3414.94 00:21:28.456 Core 3: 68802.98 268.76 231.64 70.85 3549.99 00:21:28.456 ========================================================================== 00:21:28.456 Total : 142680.67 557.35 223.41 67.41 3549.99 00:21:28.456 00:21:28.456 Total operations: 713472, translate 0 pull_push 0 memzero 713472 00:21:28.456 05:19:25 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 
traddr:192.168.100.8 trsvcid:4420' 00:21:28.456 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.715 [2024-11-20 05:19:25.325326] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:31.249 Initializing NVMe Controllers 00:21:31.249 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:21:31.249 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:21:31.249 Initialization complete. Launching workers. 00:21:31.249 ======================================================== 00:21:31.249 Latency(us) 00:21:31.249 Device Information : IOPS MiB/s Average min max 00:21:31.249 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7995.90 6418.91 10969.60 00:21:31.249 ======================================================== 00:21:31.249 Total : 2016.00 7.88 7995.90 6418.91 10969.60 00:21:31.249 00:21:31.249 05:19:27 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:21:31.249 05:19:27 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:21:31.249 05:19:27 -- host/dma.sh@48 -- # local subsystem=0 00:21:31.249 05:19:27 -- host/dma.sh@50 -- # jq . 00:21:31.249 [2024-11-20 05:19:27.651094] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:31.249 [2024-11-20 05:19:27.651142] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342519 ] 00:21:31.249 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.250 [2024-11-20 05:19:27.700421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:31.250 [2024-11-20 05:19:27.769770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.250 [2024-11-20 05:19:27.769774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.250 [2024-11-20 05:19:27.975081] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:21:36.525 [2024-11-20 05:19:33.007440] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:21:36.525 bdev ebd0381a-27e0-4dd7-ad0d-c32aeadc2a2a reports 1 memory domains 00:21:36.525 bdev ebd0381a-27e0-4dd7-ad0d-c32aeadc2a2a supports RDMA memory domain 00:21:36.525 Initialization complete, running randrw IO for 5 sec on 2 cores 00:21:36.525 ========================================================================== 00:21:36.525 Latency [us] 00:21:36.525 IOPS MiB/s Average min max 00:21:36.525 Core 2: 19733.45 77.08 809.70 48.50 15587.19 00:21:36.525 Core 3: 20024.85 78.22 797.98 13.87 15244.90 00:21:36.525 ========================================================================== 00:21:36.525 Total : 39758.30 155.31 803.80 13.87 15587.19 00:21:36.525 00:21:36.525 Total operations: 198932, translate 198830 pull_push 0 memzero 102 00:21:36.525 05:19:33 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:21:36.525 05:19:33 -- host/dma.sh@120 -- # nvmftestfini 00:21:36.525 05:19:33 -- nvmf/common.sh@476 -- # 
nvmfcleanup 00:21:36.525 05:19:33 -- nvmf/common.sh@116 -- # sync 00:21:36.525 05:19:33 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:36.525 05:19:33 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:36.525 05:19:33 -- nvmf/common.sh@119 -- # set +e 00:21:36.525 05:19:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:36.525 05:19:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:36.525 rmmod nvme_rdma 00:21:36.525 rmmod nvme_fabrics 00:21:36.525 05:19:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:36.525 05:19:33 -- nvmf/common.sh@123 -- # set -e 00:21:36.525 05:19:33 -- nvmf/common.sh@124 -- # return 0 00:21:36.525 05:19:33 -- nvmf/common.sh@477 -- # '[' -n 339245 ']' 00:21:36.525 05:19:33 -- nvmf/common.sh@478 -- # killprocess 339245 00:21:36.525 05:19:33 -- common/autotest_common.sh@936 -- # '[' -z 339245 ']' 00:21:36.525 05:19:33 -- common/autotest_common.sh@940 -- # kill -0 339245 00:21:36.525 05:19:33 -- common/autotest_common.sh@941 -- # uname 00:21:36.525 05:19:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:36.525 05:19:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 339245 00:21:36.525 05:19:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:36.525 05:19:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:36.525 05:19:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 339245' 00:21:36.525 killing process with pid 339245 00:21:36.525 05:19:33 -- common/autotest_common.sh@955 -- # kill 339245 00:21:36.525 05:19:33 -- common/autotest_common.sh@960 -- # wait 339245 00:21:37.094 05:19:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:37.094 05:19:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:37.094 00:21:37.094 real 0m31.574s 00:21:37.094 user 1m36.231s 00:21:37.094 sys 0m5.144s 00:21:37.094 05:19:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:37.094 05:19:33 -- common/autotest_common.sh@10 
-- # set +x 00:21:37.094 ************************************ 00:21:37.094 END TEST dma 00:21:37.094 ************************************ 00:21:37.094 05:19:33 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:21:37.094 05:19:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:37.094 05:19:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:37.094 05:19:33 -- common/autotest_common.sh@10 -- # set +x 00:21:37.094 ************************************ 00:21:37.094 START TEST nvmf_identify 00:21:37.094 ************************************ 00:21:37.094 05:19:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:21:37.094 * Looking for test storage... 00:21:37.094 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:21:37.094 05:19:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:37.094 05:19:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:37.094 05:19:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:37.094 05:19:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:37.094 05:19:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:37.094 05:19:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:37.094 05:19:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:37.094 05:19:33 -- scripts/common.sh@335 -- # IFS=.-: 00:21:37.094 05:19:33 -- scripts/common.sh@335 -- # read -ra ver1 00:21:37.094 05:19:33 -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.094 05:19:33 -- scripts/common.sh@336 -- # read -ra ver2 00:21:37.094 05:19:33 -- scripts/common.sh@337 -- # local 'op=<' 00:21:37.094 05:19:33 -- scripts/common.sh@339 -- # ver1_l=2 00:21:37.094 05:19:33 -- scripts/common.sh@340 -- # ver2_l=1 00:21:37.094 05:19:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:21:37.094 05:19:33 -- scripts/common.sh@343 -- # case "$op" in 00:21:37.094 05:19:33 -- scripts/common.sh@344 -- # : 1 00:21:37.094 05:19:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:37.094 05:19:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:37.094 05:19:33 -- scripts/common.sh@364 -- # decimal 1 00:21:37.094 05:19:33 -- scripts/common.sh@352 -- # local d=1 00:21:37.094 05:19:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.094 05:19:33 -- scripts/common.sh@354 -- # echo 1 00:21:37.094 05:19:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:37.094 05:19:33 -- scripts/common.sh@365 -- # decimal 2 00:21:37.094 05:19:33 -- scripts/common.sh@352 -- # local d=2 00:21:37.094 05:19:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.094 05:19:33 -- scripts/common.sh@354 -- # echo 2 00:21:37.094 05:19:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:37.094 05:19:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:37.094 05:19:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:37.094 05:19:33 -- scripts/common.sh@367 -- # return 0 00:21:37.094 05:19:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.094 05:19:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:37.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.094 --rc genhtml_branch_coverage=1 00:21:37.094 --rc genhtml_function_coverage=1 00:21:37.094 --rc genhtml_legend=1 00:21:37.094 --rc geninfo_all_blocks=1 00:21:37.094 --rc geninfo_unexecuted_blocks=1 00:21:37.094 00:21:37.094 ' 00:21:37.095 05:19:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:37.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.095 --rc genhtml_branch_coverage=1 00:21:37.095 --rc genhtml_function_coverage=1 00:21:37.095 --rc genhtml_legend=1 00:21:37.095 --rc geninfo_all_blocks=1 00:21:37.095 --rc 
geninfo_unexecuted_blocks=1 00:21:37.095 00:21:37.095 ' 00:21:37.095 05:19:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:37.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.095 --rc genhtml_branch_coverage=1 00:21:37.095 --rc genhtml_function_coverage=1 00:21:37.095 --rc genhtml_legend=1 00:21:37.095 --rc geninfo_all_blocks=1 00:21:37.095 --rc geninfo_unexecuted_blocks=1 00:21:37.095 00:21:37.095 ' 00:21:37.095 05:19:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:37.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.095 --rc genhtml_branch_coverage=1 00:21:37.095 --rc genhtml_function_coverage=1 00:21:37.095 --rc genhtml_legend=1 00:21:37.095 --rc geninfo_all_blocks=1 00:21:37.095 --rc geninfo_unexecuted_blocks=1 00:21:37.095 00:21:37.095 ' 00:21:37.095 05:19:33 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.095 05:19:33 -- nvmf/common.sh@7 -- # uname -s 00:21:37.095 05:19:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.095 05:19:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.095 05:19:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.095 05:19:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.095 05:19:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.095 05:19:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.095 05:19:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.095 05:19:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.095 05:19:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.095 05:19:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.095 05:19:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:37.095 05:19:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:37.095 05:19:33 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.095 05:19:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.095 05:19:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:37.095 05:19:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:21:37.095 05:19:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.095 05:19:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.095 05:19:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.095 05:19:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.095 05:19:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.095 05:19:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.095 05:19:33 -- paths/export.sh@5 -- # export PATH 00:21:37.095 05:19:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.095 05:19:33 -- nvmf/common.sh@46 -- # : 0 00:21:37.095 05:19:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:37.095 05:19:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:37.095 05:19:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:37.095 05:19:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.095 05:19:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.095 05:19:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:37.095 05:19:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:37.095 05:19:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:37.095 05:19:33 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:37.095 05:19:33 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:37.095 05:19:33 -- host/identify.sh@14 -- # 
nvmftestinit 00:21:37.095 05:19:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:37.095 05:19:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.095 05:19:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:37.095 05:19:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:37.095 05:19:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:37.095 05:19:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.095 05:19:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.095 05:19:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.095 05:19:33 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:37.095 05:19:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:37.095 05:19:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:37.095 05:19:33 -- common/autotest_common.sh@10 -- # set +x 00:21:42.412 05:19:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:42.412 05:19:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:42.412 05:19:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:42.412 05:19:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:42.412 05:19:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:42.412 05:19:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:42.412 05:19:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:42.412 05:19:38 -- nvmf/common.sh@294 -- # net_devs=() 00:21:42.412 05:19:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:42.412 05:19:38 -- nvmf/common.sh@295 -- # e810=() 00:21:42.412 05:19:38 -- nvmf/common.sh@295 -- # local -ga e810 00:21:42.412 05:19:38 -- nvmf/common.sh@296 -- # x722=() 00:21:42.412 05:19:38 -- nvmf/common.sh@296 -- # local -ga x722 00:21:42.412 05:19:38 -- nvmf/common.sh@297 -- # mlx=() 00:21:42.412 05:19:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:42.412 05:19:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.412 05:19:38 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.412 05:19:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:42.412 05:19:38 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:42.412 05:19:38 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:42.412 05:19:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:42.412 05:19:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:42.412 05:19:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:42.412 05:19:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:42.412 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:42.412 05:19:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.412 05:19:38 
-- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:42.412 05:19:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:42.412 05:19:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:42.412 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:42.412 05:19:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:42.412 05:19:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:42.412 05:19:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:21:42.412 05:19:38 -- nvmf/common.sh@376 -- # modinfo irdma 00:21:42.412 05:19:38 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:21:42.412 05:19:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:42.412 05:19:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.412 05:19:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:42.412 05:19:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.412 05:19:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:42.412 Found net devices under 0000:af:00.0: cvl_0_0 00:21:42.412 05:19:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.412 05:19:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:42.412 05:19:38 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.412 05:19:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:42.412 05:19:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.412 05:19:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:42.412 Found net devices under 0000:af:00.1: cvl_0_1 00:21:42.412 05:19:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.412 05:19:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:42.412 05:19:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:42.412 05:19:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:42.412 05:19:38 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:42.412 05:19:38 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:42.412 05:19:38 -- nvmf/common.sh@57 -- # uname 00:21:42.412 05:19:38 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:42.412 05:19:38 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:42.412 05:19:38 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:42.412 05:19:38 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:42.412 05:19:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:42.412 05:19:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:42.412 05:19:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:42.412 05:19:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:42.412 05:19:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:42.412 05:19:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:42.412 05:19:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:42.412 05:19:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:42.412 05:19:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:42.412 05:19:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:42.413 05:19:39 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:42.413 05:19:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:42.413 05:19:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:42.413 05:19:39 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:21:42.413 05:19:39 -- nvmf/common.sh@104 -- # continue 2 00:21:42.413 05:19:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:42.413 05:19:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:42.413 05:19:39 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:21:42.413 05:19:39 -- nvmf/common.sh@104 -- # continue 2 00:21:42.413 05:19:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:42.413 05:19:39 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:21:42.413 05:19:39 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:42.413 05:19:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:42.413 05:19:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:42.413 05:19:39 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:21:42.413 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:42.413 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:21:42.413 altname enp175s0f0np0 00:21:42.413 altname ens801f0np0 00:21:42.413 inet 192.168.100.8/24 scope global cvl_0_0 00:21:42.413 valid_lft forever preferred_lft 
forever 00:21:42.413 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:21:42.413 valid_lft forever preferred_lft forever 00:21:42.413 05:19:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:42.413 05:19:39 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:21:42.413 05:19:39 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:42.413 05:19:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:42.413 05:19:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:42.413 05:19:39 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:21:42.413 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:42.413 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:21:42.413 altname enp175s0f1np1 00:21:42.413 altname ens801f1np1 00:21:42.413 inet 192.168.100.9/24 scope global cvl_0_1 00:21:42.413 valid_lft forever preferred_lft forever 00:21:42.413 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:21:42.413 valid_lft forever preferred_lft forever 00:21:42.413 05:19:39 -- nvmf/common.sh@410 -- # return 0 00:21:42.413 05:19:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:42.413 05:19:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:42.413 05:19:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:42.413 05:19:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:42.413 05:19:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:42.413 05:19:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:42.413 05:19:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:42.413 05:19:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:42.413 05:19:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:42.413 05:19:39 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:42.413 05:19:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:42.413 05:19:39 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:21:42.413 05:19:39 -- nvmf/common.sh@104 -- # continue 2 00:21:42.413 05:19:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:42.413 05:19:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:42.413 05:19:39 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:42.413 05:19:39 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:21:42.413 05:19:39 -- nvmf/common.sh@104 -- # continue 2 00:21:42.413 05:19:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:42.413 05:19:39 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:21:42.413 05:19:39 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:42.413 05:19:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:42.413 05:19:39 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:21:42.413 05:19:39 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:42.413 05:19:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:42.413 05:19:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:42.413 192.168.100.9' 00:21:42.413 05:19:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:42.413 
192.168.100.9' 00:21:42.413 05:19:39 -- nvmf/common.sh@445 -- # head -n 1 00:21:42.413 05:19:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:42.413 05:19:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:42.413 192.168.100.9' 00:21:42.413 05:19:39 -- nvmf/common.sh@446 -- # tail -n +2 00:21:42.413 05:19:39 -- nvmf/common.sh@446 -- # head -n 1 00:21:42.413 05:19:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:42.413 05:19:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:42.413 05:19:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:42.413 05:19:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:42.413 05:19:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:42.413 05:19:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:42.413 05:19:39 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:42.413 05:19:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.413 05:19:39 -- common/autotest_common.sh@10 -- # set +x 00:21:42.413 05:19:39 -- host/identify.sh@19 -- # nvmfpid=346487 00:21:42.413 05:19:39 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:42.413 05:19:39 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.413 05:19:39 -- host/identify.sh@23 -- # waitforlisten 346487 00:21:42.413 05:19:39 -- common/autotest_common.sh@829 -- # '[' -z 346487 ']' 00:21:42.413 05:19:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.413 05:19:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.413 05:19:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:42.413 05:19:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.413 05:19:39 -- common/autotest_common.sh@10 -- # set +x 00:21:42.413 [2024-11-20 05:19:39.198863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:42.413 [2024-11-20 05:19:39.198906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.413 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.672 [2024-11-20 05:19:39.254552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.672 [2024-11-20 05:19:39.330266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:42.672 [2024-11-20 05:19:39.330374] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.672 [2024-11-20 05:19:39.330381] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.672 [2024-11-20 05:19:39.330387] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:42.672 [2024-11-20 05:19:39.330435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.672 [2024-11-20 05:19:39.330551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.672 [2024-11-20 05:19:39.330639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.672 [2024-11-20 05:19:39.330640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.240 05:19:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.240 05:19:40 -- common/autotest_common.sh@862 -- # return 0 00:21:43.240 05:19:40 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:43.240 05:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.240 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:43.240 [2024-11-20 05:19:40.043546] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x150f100/0x150e740) succeed. 00:21:43.240 [2024-11-20 05:19:40.054064] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1510470/0x150ecc0) succeed. 00:21:43.240 [2024-11-20 05:19:40.054087] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:21:43.240 05:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.240 05:19:40 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:43.240 05:19:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.240 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 05:19:40 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:43.504 05:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.504 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 Malloc0 00:21:43.504 05:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.504 05:19:40 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.504 05:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.504 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 05:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.504 05:19:40 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:43.504 05:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.504 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 05:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.504 05:19:40 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:43.504 05:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.504 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 [2024-11-20 05:19:40.145500] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:43.504 05:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.504 05:19:40 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 
192.168.100.8 -s 4420 00:21:43.504 05:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.504 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 05:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.504 05:19:40 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:43.504 05:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.504 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 [2024-11-20 05:19:40.161442] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:43.504 [ 00:21:43.504 { 00:21:43.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:43.504 "subtype": "Discovery", 00:21:43.504 "listen_addresses": [ 00:21:43.504 { 00:21:43.504 "transport": "RDMA", 00:21:43.504 "trtype": "RDMA", 00:21:43.504 "adrfam": "IPv4", 00:21:43.504 "traddr": "192.168.100.8", 00:21:43.504 "trsvcid": "4420" 00:21:43.504 } 00:21:43.504 ], 00:21:43.504 "allow_any_host": true, 00:21:43.504 "hosts": [] 00:21:43.504 }, 00:21:43.504 { 00:21:43.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.504 "subtype": "NVMe", 00:21:43.504 "listen_addresses": [ 00:21:43.504 { 00:21:43.504 "transport": "RDMA", 00:21:43.505 "trtype": "RDMA", 00:21:43.505 "adrfam": "IPv4", 00:21:43.505 "traddr": "192.168.100.8", 00:21:43.505 "trsvcid": "4420" 00:21:43.505 } 00:21:43.505 ], 00:21:43.505 "allow_any_host": true, 00:21:43.505 "hosts": [], 00:21:43.505 "serial_number": "SPDK00000000000001", 00:21:43.505 "model_number": "SPDK bdev Controller", 00:21:43.505 "max_namespaces": 32, 00:21:43.505 "min_cntlid": 1, 00:21:43.505 "max_cntlid": 65519, 00:21:43.505 "namespaces": [ 00:21:43.505 { 00:21:43.505 "nsid": 1, 00:21:43.505 "bdev_name": "Malloc0", 00:21:43.505 "name": "Malloc0", 00:21:43.505 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:43.505 "eui64": "ABCDEF0123456789", 00:21:43.505 "uuid": 
"e1bb8d64-ba60-4202-b384-9afd18454dbc" 00:21:43.505 } 00:21:43.505 ] 00:21:43.505 } 00:21:43.505 ] 00:21:43.505 05:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.505 05:19:40 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:43.505 [2024-11-20 05:19:40.197056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:43.505 [2024-11-20 05:19:40.197103] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346733 ] 00:21:43.505 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.505 [2024-11-20 05:19:40.229278] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:43.505 [2024-11-20 05:19:40.229342] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:21:43.505 [2024-11-20 05:19:40.229359] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:21:43.505 [2024-11-20 05:19:40.229363] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:21:43.505 [2024-11-20 05:19:40.229391] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:43.505 [2024-11-20 05:19:40.245307] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:21:43.505 [2024-11-20 05:19:40.260355] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:43.505 [2024-11-20 05:19:40.260365] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:21:43.505 [2024-11-20 05:19:40.260370] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260375] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260380] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260384] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260388] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260393] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260397] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260401] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260405] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260410] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260414] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260418] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 
05:19:40.260422] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260427] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260431] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260435] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260439] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260444] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260448] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260455] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260459] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260463] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260467] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260472] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260476] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260480] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 
0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260485] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260489] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260493] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260497] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260502] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260505] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:21:43.505 [2024-11-20 05:19:40.260510] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:43.505 [2024-11-20 05:19:40.260512] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:21:43.505 [2024-11-20 05:19:40.260528] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.260540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.266054] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.505 [2024-11-20 05:19:40.266062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:43.505 [2024-11-20 05:19:40.266068] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.266074] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:43.505 
[2024-11-20 05:19:40.266079] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:43.505 [2024-11-20 05:19:40.266084] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:43.505 [2024-11-20 05:19:40.266094] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.266101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.505 [2024-11-20 05:19:40.266125] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.505 [2024-11-20 05:19:40.266130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:21:43.505 [2024-11-20 05:19:40.266135] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:43.505 [2024-11-20 05:19:40.266139] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.266144] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:43.505 [2024-11-20 05:19:40.266152] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.266158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.505 [2024-11-20 05:19:40.266181] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.505 [2024-11-20 05:19:40.266186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:21:43.505 [2024-11-20 05:19:40.266191] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:43.505 [2024-11-20 05:19:40.266195] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x9dd9983a 00:21:43.505 [2024-11-20 05:19:40.266200] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:43.505 [2024-11-20 05:19:40.266206] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.506 [2024-11-20 05:19:40.266235] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.506 [2024-11-20 05:19:40.266240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:43.506 [2024-11-20 05:19:40.266244] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:43.506 [2024-11-20 05:19:40.266249] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266255] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.506 [2024-11-20 05:19:40.266289] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.506 [2024-11-20 05:19:40.266293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:43.506 [2024-11-20 05:19:40.266298] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:43.506 [2024-11-20 05:19:40.266302] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:43.506 [2024-11-20 05:19:40.266306] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266311] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:43.506 [2024-11-20 05:19:40.266415] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:43.506 [2024-11-20 05:19:40.266419] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:43.506 [2024-11-20 05:19:40.266426] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.506 [2024-11-20 05:19:40.266465] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.506 [2024-11-20 05:19:40.266469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:43.506 [2024-11-20 05:19:40.266473] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:43.506 [2024-11-20 05:19:40.266479] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266486] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.506 [2024-11-20 05:19:40.266512] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.506 [2024-11-20 05:19:40.266517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:43.506 [2024-11-20 05:19:40.266521] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:43.506 [2024-11-20 05:19:40.266525] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:43.506 [2024-11-20 05:19:40.266529] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266534] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:43.506 [2024-11-20 05:19:40.266540] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:43.506 [2024-11-20 05:19:40.266548] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.506 
[2024-11-20 05:19:40.266555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266595] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.506 [2024-11-20 05:19:40.266599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:43.506 [2024-11-20 05:19:40.266606] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:43.506 [2024-11-20 05:19:40.266610] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:43.506 [2024-11-20 05:19:40.266614] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:43.506 [2024-11-20 05:19:40.266618] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 6 00:21:43.506 [2024-11-20 05:19:40.266622] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:43.506 [2024-11-20 05:19:40.266626] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:43.506 [2024-11-20 05:19:40.266630] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266638] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:43.506 [2024-11-20 05:19:40.266644] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 
05:19:40.266650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.506 [2024-11-20 05:19:40.266678] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.506 [2024-11-20 05:19:40.266682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:43.506 [2024-11-20 05:19:40.266690] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.506 [2024-11-20 05:19:40.266701] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.506 [2024-11-20 05:19:40.266712] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.506 [2024-11-20 05:19:40.266722] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.506 [2024-11-20 05:19:40.266731] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:43.506 [2024-11-20 
05:19:40.266735] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266744] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:43.506 [2024-11-20 05:19:40.266749] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.506 [2024-11-20 05:19:40.266786] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.506 [2024-11-20 05:19:40.266791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:21:43.506 [2024-11-20 05:19:40.266795] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:43.506 [2024-11-20 05:19:40.266800] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:43.506 [2024-11-20 05:19:40.266804] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266811] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266848] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.506 [2024-11-20 
05:19:40.266853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:43.506 [2024-11-20 05:19:40.266858] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266866] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:43.506 [2024-11-20 05:19:40.266884] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x9dd9983a 00:21:43.506 [2024-11-20 05:19:40.266891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x9dd9983a 00:21:43.507 [2024-11-20 05:19:40.266897] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x9dd9983a 00:21:43.507 [2024-11-20 05:19:40.266904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.507 [2024-11-20 05:19:40.266931] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.507 [2024-11-20 05:19:40.266936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:43.507 [2024-11-20 05:19:40.266945] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x9dd9983a 00:21:43.507 [2024-11-20 05:19:40.266951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x9dd9983a 00:21:43.507 [2024-11-20 05:19:40.266955] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x9dd9983a 00:21:43.507 [2024-11-20 
05:19:40.266960] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.507 [2024-11-20 05:19:40.266964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:43.507 [2024-11-20 05:19:40.266968] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x9dd9983a 00:21:43.507 [2024-11-20 05:19:40.266992] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.507 [2024-11-20 05:19:40.266997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:43.507 [2024-11-20 05:19:40.267004] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x9dd9983a 00:21:43.507 [2024-11-20 05:19:40.267010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x9dd9983a 00:21:43.507 [2024-11-20 05:19:40.267014] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x9dd9983a 00:21:43.507 [2024-11-20 05:19:40.267040] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.507 [2024-11-20 05:19:40.267045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:43.507 [2024-11-20 05:19:40.267061] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x9dd9983a 00:21:43.507 ===================================================== 00:21:43.507 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:43.507 ===================================================== 00:21:43.507 Controller Capabilities/Features 00:21:43.507 ================================ 00:21:43.507 Vendor ID: 0000 
00:21:43.507 Subsystem Vendor ID: 0000 00:21:43.507 Serial Number: .................... 00:21:43.507 Model Number: ........................................ 00:21:43.507 Firmware Version: 24.01.1 00:21:43.507 Recommended Arb Burst: 0 00:21:43.507 IEEE OUI Identifier: 00 00 00 00:21:43.507 Multi-path I/O 00:21:43.507 May have multiple subsystem ports: No 00:21:43.507 May have multiple controllers: No 00:21:43.507 Associated with SR-IOV VF: No 00:21:43.507 Max Data Transfer Size: 131072 00:21:43.507 Max Number of Namespaces: 0 00:21:43.507 Max Number of I/O Queues: 1024 00:21:43.507 NVMe Specification Version (VS): 1.3 00:21:43.507 NVMe Specification Version (Identify): 1.3 00:21:43.507 Maximum Queue Entries: 128 00:21:43.507 Contiguous Queues Required: Yes 00:21:43.507 Arbitration Mechanisms Supported 00:21:43.507 Weighted Round Robin: Not Supported 00:21:43.507 Vendor Specific: Not Supported 00:21:43.507 Reset Timeout: 15000 ms 00:21:43.507 Doorbell Stride: 4 bytes 00:21:43.507 NVM Subsystem Reset: Not Supported 00:21:43.507 Command Sets Supported 00:21:43.507 NVM Command Set: Supported 00:21:43.507 Boot Partition: Not Supported 00:21:43.507 Memory Page Size Minimum: 4096 bytes 00:21:43.507 Memory Page Size Maximum: 4096 bytes 00:21:43.507 Persistent Memory Region: Not Supported 00:21:43.507 Optional Asynchronous Events Supported 00:21:43.507 Namespace Attribute Notices: Not Supported 00:21:43.507 Firmware Activation Notices: Not Supported 00:21:43.507 ANA Change Notices: Not Supported 00:21:43.507 PLE Aggregate Log Change Notices: Not Supported 00:21:43.507 LBA Status Info Alert Notices: Not Supported 00:21:43.507 EGE Aggregate Log Change Notices: Not Supported 00:21:43.507 Normal NVM Subsystem Shutdown event: Not Supported 00:21:43.507 Zone Descriptor Change Notices: Not Supported 00:21:43.507 Discovery Log Change Notices: Supported 00:21:43.507 Controller Attributes 00:21:43.507 128-bit Host Identifier: Not Supported 00:21:43.507 Non-Operational Permissive Mode: 
Not Supported 00:21:43.507 NVM Sets: Not Supported 00:21:43.507 Read Recovery Levels: Not Supported 00:21:43.507 Endurance Groups: Not Supported 00:21:43.507 Predictable Latency Mode: Not Supported 00:21:43.507 Traffic Based Keep ALive: Not Supported 00:21:43.507 Namespace Granularity: Not Supported 00:21:43.507 SQ Associations: Not Supported 00:21:43.507 UUID List: Not Supported 00:21:43.507 Multi-Domain Subsystem: Not Supported 00:21:43.507 Fixed Capacity Management: Not Supported 00:21:43.507 Variable Capacity Management: Not Supported 00:21:43.507 Delete Endurance Group: Not Supported 00:21:43.507 Delete NVM Set: Not Supported 00:21:43.507 Extended LBA Formats Supported: Not Supported 00:21:43.507 Flexible Data Placement Supported: Not Supported 00:21:43.507 00:21:43.507 Controller Memory Buffer Support 00:21:43.507 ================================ 00:21:43.507 Supported: No 00:21:43.507 00:21:43.507 Persistent Memory Region Support 00:21:43.507 ================================ 00:21:43.507 Supported: No 00:21:43.507 00:21:43.507 Admin Command Set Attributes 00:21:43.507 ============================ 00:21:43.507 Security Send/Receive: Not Supported 00:21:43.507 Format NVM: Not Supported 00:21:43.507 Firmware Activate/Download: Not Supported 00:21:43.507 Namespace Management: Not Supported 00:21:43.507 Device Self-Test: Not Supported 00:21:43.507 Directives: Not Supported 00:21:43.507 NVMe-MI: Not Supported 00:21:43.507 Virtualization Management: Not Supported 00:21:43.507 Doorbell Buffer Config: Not Supported 00:21:43.507 Get LBA Status Capability: Not Supported 00:21:43.507 Command & Feature Lockdown Capability: Not Supported 00:21:43.507 Abort Command Limit: 1 00:21:43.507 Async Event Request Limit: 4 00:21:43.507 Number of Firmware Slots: N/A 00:21:43.507 Firmware Slot 1 Read-Only: N/A 00:21:43.507 Firmware Activation Without Reset: N/A 00:21:43.507 Multiple Update Detection Support: N/A 00:21:43.507 Firmware Update Granularity: No Information Provided 
00:21:43.507 Per-Namespace SMART Log: No 00:21:43.507 Asymmetric Namespace Access Log Page: Not Supported 00:21:43.507 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:43.507 Command Effects Log Page: Not Supported 00:21:43.507 Get Log Page Extended Data: Supported 00:21:43.507 Telemetry Log Pages: Not Supported 00:21:43.507 Persistent Event Log Pages: Not Supported 00:21:43.507 Supported Log Pages Log Page: May Support 00:21:43.507 Commands Supported & Effects Log Page: Not Supported 00:21:43.507 Feature Identifiers & Effects Log Page:May Support 00:21:43.507 NVMe-MI Commands & Effects Log Page: May Support 00:21:43.507 Data Area 4 for Telemetry Log: Not Supported 00:21:43.507 Error Log Page Entries Supported: 128 00:21:43.507 Keep Alive: Not Supported 00:21:43.507 00:21:43.507 NVM Command Set Attributes 00:21:43.507 ========================== 00:21:43.507 Submission Queue Entry Size 00:21:43.507 Max: 1 00:21:43.507 Min: 1 00:21:43.507 Completion Queue Entry Size 00:21:43.507 Max: 1 00:21:43.507 Min: 1 00:21:43.507 Number of Namespaces: 0 00:21:43.507 Compare Command: Not Supported 00:21:43.507 Write Uncorrectable Command: Not Supported 00:21:43.508 Dataset Management Command: Not Supported 00:21:43.508 Write Zeroes Command: Not Supported 00:21:43.508 Set Features Save Field: Not Supported 00:21:43.508 Reservations: Not Supported 00:21:43.508 Timestamp: Not Supported 00:21:43.508 Copy: Not Supported 00:21:43.508 Volatile Write Cache: Not Present 00:21:43.508 Atomic Write Unit (Normal): 1 00:21:43.508 Atomic Write Unit (PFail): 1 00:21:43.508 Atomic Compare & Write Unit: 1 00:21:43.508 Fused Compare & Write: Supported 00:21:43.508 Scatter-Gather List 00:21:43.508 SGL Command Set: Supported 00:21:43.508 SGL Keyed: Supported 00:21:43.508 SGL Bit Bucket Descriptor: Not Supported 00:21:43.508 SGL Metadata Pointer: Not Supported 00:21:43.508 Oversized SGL: Not Supported 00:21:43.508 SGL Metadata Address: Not Supported 00:21:43.508 SGL Offset: Supported 
00:21:43.508 Transport SGL Data Block: Not Supported 00:21:43.508 Replay Protected Memory Block: Not Supported 00:21:43.508 00:21:43.508 Firmware Slot Information 00:21:43.508 ========================= 00:21:43.508 Active slot: 0 00:21:43.508 00:21:43.508 00:21:43.508 Error Log 00:21:43.508 ========= 00:21:43.508 00:21:43.508 Active Namespaces 00:21:43.508 ================= 00:21:43.508 Discovery Log Page 00:21:43.508 ================== 00:21:43.508 Generation Counter: 2 00:21:43.508 Number of Records: 2 00:21:43.508 Record Format: 0 00:21:43.508 00:21:43.508 Discovery Log Entry 0 00:21:43.508 ---------------------- 00:21:43.508 Transport Type: 1 (RDMA) 00:21:43.508 Address Family: 1 (IPv4) 00:21:43.508 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:43.508 Entry Flags: 00:21:43.508 Duplicate Returned Information: 1 00:21:43.508 Explicit Persistent Connection Support for Discovery: 1 00:21:43.508 Transport Requirements: 00:21:43.508 Secure Channel: Not Required 00:21:43.508 Port ID: 0 (0x0000) 00:21:43.508 Controller ID: 65535 (0xffff) 00:21:43.508 Admin Max SQ Size: 128 00:21:43.508 Transport Service Identifier: 4420 00:21:43.508 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:43.508 Transport Address: 192.168.100.8 00:21:43.508 Transport Specific Address Subtype - RDMA 00:21:43.508 RDMA QP Service Type: 1 (Reliable Connected) 00:21:43.508 RDMA Provider Type: 1 (No provider specified) 00:21:43.508 RDMA CM Service: 1 (RDMA_CM) 00:21:43.508 Discovery Log Entry 1 00:21:43.508 ---------------------- 00:21:43.508 Transport Type: 1 (RDMA) 00:21:43.508 Address Family: 1 (IPv4) 00:21:43.508 Subsystem Type: 2 (NVM Subsystem) 00:21:43.508 Entry Flags: 00:21:43.508 Duplicate Returned Information: 0 00:21:43.508 Explicit Persistent Connection Support for Discovery: 0 00:21:43.508 Transport Requirements: 00:21:43.508 Secure Channel: Not Required 00:21:43.508 Port ID: 0 (0x0000) 00:21:43.508 Controller ID: 65535 (0xffff) 00:21:43.508 Admin Max SQ 
Size: [2024-11-20 05:19:40.267132] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:43.508 [2024-11-20 05:19:40.267140] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38368 doesn't match qid 00:21:43.508 [2024-11-20 05:19:40.267151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32734 cdw0:5 sqhd:ae28 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267156] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38368 doesn't match qid 00:21:43.508 [2024-11-20 05:19:40.267162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32734 cdw0:5 sqhd:ae28 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267167] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38368 doesn't match qid 00:21:43.508 [2024-11-20 05:19:40.267172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32734 cdw0:5 sqhd:ae28 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267177] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38368 doesn't match qid 00:21:43.508 [2024-11-20 05:19:40.267182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32734 cdw0:5 sqhd:ae28 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267190] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.508 [2024-11-20 05:19:40.267222] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.508 [2024-11-20 05:19:40.267226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:21:43.508 
[2024-11-20 05:19:40.267232] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.508 [2024-11-20 05:19:40.267242] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267274] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.508 [2024-11-20 05:19:40.267278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267283] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:43.508 [2024-11-20 05:19:40.267287] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:43.508 [2024-11-20 05:19:40.267291] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267297] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.508 [2024-11-20 05:19:40.267326] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.508 [2024-11-20 05:19:40.267331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267336] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x9dd9983a 00:21:43.508 
[2024-11-20 05:19:40.267343] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.508 [2024-11-20 05:19:40.267375] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.508 [2024-11-20 05:19:40.267380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267384] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267391] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.508 [2024-11-20 05:19:40.267424] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.508 [2024-11-20 05:19:40.267429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267433] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267440] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.508 [2024-11-20 05:19:40.267469] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.508 [2024-11-20 
05:19:40.267474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267478] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267487] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.508 [2024-11-20 05:19:40.267493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.508 [2024-11-20 05:19:40.267519] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.508 [2024-11-20 05:19:40.267524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:21:43.508 [2024-11-20 05:19:40.267528] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267535] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.267563] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.267572] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267579] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 
05:19:40.267585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.267611] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.267620] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267627] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.267656] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.267665] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267672] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.267702] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 
05:19:40.267710] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267717] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.267749] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.267759] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267766] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.267799] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.267808] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267814] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 
[2024-11-20 05:19:40.267846] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.267854] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267861] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.267888] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.267896] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267903] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.267933] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.267941] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267948] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.267975] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.267979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.267983] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267990] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.267996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.268028] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.268032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.268038] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.268044] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.268055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.268082] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.268087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.268091] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.268098] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.268103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.268124] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.268129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.268133] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.268140] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.268145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.509 [2024-11-20 05:19:40.268166] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.509 [2024-11-20 05:19:40.268170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:21:43.509 [2024-11-20 05:19:40.268175] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x9dd9983a 00:21:43.509 [2024-11-20 05:19:40.268182] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268187] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268213] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268221] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268228] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268256] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268265] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268272] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268308] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268319] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268325] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268357] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268366] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268373] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268406] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268414] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268421] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 
05:19:40.268453] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268461] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268468] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268498] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268506] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268513] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268545] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268553] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268560] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268593] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268602] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268609] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268639] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268647] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268654] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.510 [2024-11-20 05:19:40.268686] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.510 [2024-11-20 05:19:40.268690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:21:43.510 [2024-11-20 05:19:40.268694] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x9dd9983a 00:21:43.510 [2024-11-20 05:19:40.268701] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.268734] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.268738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.268743] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268750] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.268779] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.268784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.268788] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268795] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268801] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.268823] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.268827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.268832] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268838] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.268870] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.268874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.268878] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268885] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.268920] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.268924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.268928] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268935] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.268970] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.268974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.268978] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268985] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.268991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.269013] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.269018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.269022] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269029] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 
05:19:40.269069] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.269073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.269078] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269085] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.269116] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.269121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.269125] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269132] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.269171] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.269175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.269179] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269186] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.269218] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.269222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.269226] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269233] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.269263] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.269267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.269271] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269278] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.269308] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.269312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.269317] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269323] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.269360] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.511 [2024-11-20 05:19:40.269364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:43.511 [2024-11-20 05:19:40.269368] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269375] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.511 [2024-11-20 05:19:40.269381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.511 [2024-11-20 05:19:40.269411] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269420] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269428] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269434] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269462] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269470] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269477] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269507] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269515] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269522] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269552] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269561] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269567] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269600] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269609] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269616] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269644] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269653] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269660] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 
05:19:40.269689] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269698] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269706] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269738] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269746] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269753] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269781] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269790] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269796] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269828] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269836] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269843] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269874] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269883] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269890] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269923] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269931] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269938] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.269968] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.269972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.269978] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269985] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.269990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.270016] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.270020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.270025] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.270031] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.512 [2024-11-20 05:19:40.270037] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.512 [2024-11-20 05:19:40.274053] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.512 [2024-11-20 05:19:40.274060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:21:43.512 [2024-11-20 05:19:40.274064] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x9dd9983a 00:21:43.513 [2024-11-20 05:19:40.274071] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x9dd9983a 00:21:43.513 [2024-11-20 05:19:40.274077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.513 [2024-11-20 05:19:40.274100] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.513 [2024-11-20 05:19:40.274104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000d p:0 m:0 dnr:0 00:21:43.513 [2024-11-20 05:19:40.274109] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x9dd9983a 00:21:43.513 [2024-11-20 05:19:40.274114] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:43.513 128 00:21:43.513 Transport Service Identifier: 4420 00:21:43.513 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:43.513 Transport Address: 192.168.100.8 00:21:43.513 Transport Specific Address Subtype - RDMA 00:21:43.513 RDMA QP Service Type: 1 (Reliable Connected) 00:21:43.513 RDMA Provider Type: 1 (No provider specified) 00:21:43.513 RDMA CM Service: 1 (RDMA_CM) 00:21:43.513 05:19:40 -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:43.777 [2024-11-20 05:19:40.337679] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:43.777 [2024-11-20 05:19:40.337725] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346739 ] 00:21:43.777 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.777 [2024-11-20 05:19:40.369813] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:43.777 [2024-11-20 05:19:40.369868] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:21:43.777 [2024-11-20 05:19:40.369883] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:21:43.777 [2024-11-20 05:19:40.369887] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:21:43.777 [2024-11-20 05:19:40.369907] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:43.777 [2024-11-20 05:19:40.386317] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:21:43.777 [2024-11-20 05:19:40.401350] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:43.777 [2024-11-20 05:19:40.401359] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:21:43.777 [2024-11-20 05:19:40.401365] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401370] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401374] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401378] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401383] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401387] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401391] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401396] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401400] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401404] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401408] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401413] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 
05:19:40.401417] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401421] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401425] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401430] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401434] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401438] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401443] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401447] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401451] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401455] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401460] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401464] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401468] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401472] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 
0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401479] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401483] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401487] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401492] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401496] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401500] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:21:43.777 [2024-11-20 05:19:40.401504] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:43.777 [2024-11-20 05:19:40.401507] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:21:43.777 [2024-11-20 05:19:40.401519] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.401528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.407053] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.777 [2024-11-20 05:19:40.407060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:43.777 [2024-11-20 05:19:40.407066] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.407071] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:43.777 
[2024-11-20 05:19:40.407076] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:43.777 [2024-11-20 05:19:40.407080] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:43.777 [2024-11-20 05:19:40.407089] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.407096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.777 [2024-11-20 05:19:40.407123] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.777 [2024-11-20 05:19:40.407128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:21:43.777 [2024-11-20 05:19:40.407132] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:43.777 [2024-11-20 05:19:40.407137] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.407142] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:43.777 [2024-11-20 05:19:40.407147] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.407154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.777 [2024-11-20 05:19:40.407180] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.777 [2024-11-20 05:19:40.407184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:21:43.777 [2024-11-20 05:19:40.407189] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:43.777 [2024-11-20 05:19:40.407193] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.407198] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:43.777 [2024-11-20 05:19:40.407205] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.777 [2024-11-20 05:19:40.407211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.777 [2024-11-20 05:19:40.407233] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.778 [2024-11-20 05:19:40.407237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.407241] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:43.778 [2024-11-20 05:19:40.407245] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407252] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.778 [2024-11-20 05:19:40.407288] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.778 [2024-11-20 05:19:40.407292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.407296] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:43.778 [2024-11-20 05:19:40.407300] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:43.778 [2024-11-20 05:19:40.407304] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407309] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:43.778 [2024-11-20 05:19:40.407414] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:43.778 [2024-11-20 05:19:40.407417] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:43.778 [2024-11-20 05:19:40.407424] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.778 [2024-11-20 05:19:40.407461] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.778 [2024-11-20 05:19:40.407466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.407470] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:43.778 [2024-11-20 05:19:40.407474] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407481] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.778 [2024-11-20 05:19:40.407513] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.778 [2024-11-20 05:19:40.407517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.407521] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:43.778 [2024-11-20 05:19:40.407525] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407530] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407535] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:43.778 [2024-11-20 05:19:40.407544] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407552] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x419e8d07 00:21:43.778 
[2024-11-20 05:19:40.407604] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.778 [2024-11-20 05:19:40.407608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.407614] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:43.778 [2024-11-20 05:19:40.407618] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:43.778 [2024-11-20 05:19:40.407622] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:43.778 [2024-11-20 05:19:40.407626] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 6 00:21:43.778 [2024-11-20 05:19:40.407629] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:43.778 [2024-11-20 05:19:40.407634] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407638] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407644] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407650] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.778 [2024-11-20 05:19:40.407684] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:21:43.778 [2024-11-20 05:19:40.407688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.407694] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.778 [2024-11-20 05:19:40.407705] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.778 [2024-11-20 05:19:40.407715] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.778 [2024-11-20 05:19:40.407725] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.778 [2024-11-20 05:19:40.407736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407740] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407748] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 
00:21:43.778 [2024-11-20 05:19:40.407753] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.778 [2024-11-20 05:19:40.407782] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.778 [2024-11-20 05:19:40.407786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.407790] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:43.778 [2024-11-20 05:19:40.407794] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407799] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407804] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407811] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407816] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.778 [2024-11-20 05:19:40.407850] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:21:43.778 [2024-11-20 05:19:40.407854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.407902] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407906] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407912] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407919] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407953] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.778 [2024-11-20 05:19:40.407957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.407966] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:43.778 [2024-11-20 05:19:40.407975] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.407979] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.407985] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:43.778 
[2024-11-20 05:19:40.407994] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.408000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.408034] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.778 [2024-11-20 05:19:40.408038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.408054] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.408059] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.408065] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.408071] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.408077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.408110] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.778 [2024-11-20 05:19:40.408114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:43.778 [2024-11-20 05:19:40.408120] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify ns iocs specific (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.408125] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x419e8d07 00:21:43.778 [2024-11-20 05:19:40.408130] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.408136] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.408141] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:43.778 [2024-11-20 05:19:40.408145] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:43.779 [2024-11-20 05:19:40.408150] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:43.779 [2024-11-20 05:19:40.408154] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:43.779 [2024-11-20 05:19:40.408158] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:43.779 [2024-11-20 05:19:40.408169] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.779 [2024-11-20 05:19:40.408180] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408185] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.779 [2024-11-20 05:19:40.408204] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.779 [2024-11-20 05:19:40.408208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:43.779 [2024-11-20 05:19:40.408215] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408221] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.779 [2024-11-20 05:19:40.408233] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.779 [2024-11-20 05:19:40.408237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:43.779 [2024-11-20 05:19:40.408241] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408261] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.779 [2024-11-20 05:19:40.408265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:43.779 [2024-11-20 05:19:40.408269] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408275] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.779 [2024-11-20 05:19:40.408310] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.779 [2024-11-20 05:19:40.408314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:43.779 [2024-11-20 05:19:40.408318] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408325] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.779 [2024-11-20 05:19:40.408353] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.779 [2024-11-20 05:19:40.408358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:21:43.779 [2024-11-20 05:19:40.408362] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408370] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408383] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408395] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408407] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408421] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.779 [2024-11-20 05:19:40.408425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:43.779 [2024-11-20 05:19:40.408435] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408455] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.779 [2024-11-20 05:19:40.408459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:43.779 [2024-11-20 05:19:40.408465] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408470] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.779 [2024-11-20 05:19:40.408474] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:43.779 [2024-11-20 05:19:40.408479] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x419e8d07 00:21:43.779 [2024-11-20 05:19:40.408485] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.779 [2024-11-20 05:19:40.408490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:43.779 [2024-11-20 05:19:40.408497] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x419e8d07 00:21:43.779 ===================================================== 00:21:43.779 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.779 ===================================================== 00:21:43.779 Controller Capabilities/Features 00:21:43.779 ================================ 00:21:43.779 Vendor ID: 8086 00:21:43.779 Subsystem Vendor ID: 8086 00:21:43.779 Serial Number: SPDK00000000000001 00:21:43.779 Model Number: SPDK bdev Controller 00:21:43.779 Firmware Version: 24.01.1 00:21:43.779 Recommended Arb Burst: 6 00:21:43.779 IEEE OUI Identifier: e4 d2 5c 00:21:43.779 Multi-path I/O 00:21:43.779 May have multiple subsystem ports: Yes 00:21:43.779 May have multiple controllers: Yes 00:21:43.779 Associated with SR-IOV VF: No 00:21:43.779 Max Data Transfer Size: 131072 00:21:43.779 Max Number of Namespaces: 32 00:21:43.779 Max Number of I/O Queues: 127 00:21:43.779 NVMe Specification Version (VS): 1.3 00:21:43.779 NVMe Specification Version (Identify): 1.3 00:21:43.779 Maximum Queue Entries: 128 00:21:43.779 Contiguous Queues Required: Yes 00:21:43.779 Arbitration Mechanisms Supported 00:21:43.779 Weighted Round Robin: Not Supported 00:21:43.779 Vendor Specific: Not Supported 00:21:43.779 Reset Timeout: 15000 ms 00:21:43.779 Doorbell Stride: 4 bytes 00:21:43.779 NVM Subsystem Reset: Not Supported 
00:21:43.779 Command Sets Supported 00:21:43.779 NVM Command Set: Supported 00:21:43.779 Boot Partition: Not Supported 00:21:43.779 Memory Page Size Minimum: 4096 bytes 00:21:43.779 Memory Page Size Maximum: 4096 bytes 00:21:43.779 Persistent Memory Region: Not Supported 00:21:43.779 Optional Asynchronous Events Supported 00:21:43.779 Namespace Attribute Notices: Supported 00:21:43.779 Firmware Activation Notices: Not Supported 00:21:43.779 ANA Change Notices: Not Supported 00:21:43.779 PLE Aggregate Log Change Notices: Not Supported 00:21:43.779 LBA Status Info Alert Notices: Not Supported 00:21:43.779 EGE Aggregate Log Change Notices: Not Supported 00:21:43.779 Normal NVM Subsystem Shutdown event: Not Supported 00:21:43.779 Zone Descriptor Change Notices: Not Supported 00:21:43.779 Discovery Log Change Notices: Not Supported 00:21:43.779 Controller Attributes 00:21:43.779 128-bit Host Identifier: Supported 00:21:43.779 Non-Operational Permissive Mode: Not Supported 00:21:43.779 NVM Sets: Not Supported 00:21:43.779 Read Recovery Levels: Not Supported 00:21:43.779 Endurance Groups: Not Supported 00:21:43.779 Predictable Latency Mode: Not Supported 00:21:43.779 Traffic Based Keep ALive: Not Supported 00:21:43.779 Namespace Granularity: Not Supported 00:21:43.779 SQ Associations: Not Supported 00:21:43.779 UUID List: Not Supported 00:21:43.779 Multi-Domain Subsystem: Not Supported 00:21:43.779 Fixed Capacity Management: Not Supported 00:21:43.779 Variable Capacity Management: Not Supported 00:21:43.779 Delete Endurance Group: Not Supported 00:21:43.779 Delete NVM Set: Not Supported 00:21:43.779 Extended LBA Formats Supported: Not Supported 00:21:43.779 Flexible Data Placement Supported: Not Supported 00:21:43.779 00:21:43.779 Controller Memory Buffer Support 00:21:43.779 ================================ 00:21:43.779 Supported: No 00:21:43.779 00:21:43.779 Persistent Memory Region Support 00:21:43.779 ================================ 00:21:43.779 Supported: No 
00:21:43.779 00:21:43.779 Admin Command Set Attributes 00:21:43.779 ============================ 00:21:43.779 Security Send/Receive: Not Supported 00:21:43.779 Format NVM: Not Supported 00:21:43.779 Firmware Activate/Download: Not Supported 00:21:43.779 Namespace Management: Not Supported 00:21:43.779 Device Self-Test: Not Supported 00:21:43.779 Directives: Not Supported 00:21:43.779 NVMe-MI: Not Supported 00:21:43.779 Virtualization Management: Not Supported 00:21:43.779 Doorbell Buffer Config: Not Supported 00:21:43.779 Get LBA Status Capability: Not Supported 00:21:43.779 Command & Feature Lockdown Capability: Not Supported 00:21:43.779 Abort Command Limit: 4 00:21:43.779 Async Event Request Limit: 4 00:21:43.779 Number of Firmware Slots: N/A 00:21:43.779 Firmware Slot 1 Read-Only: N/A 00:21:43.779 Firmware Activation Without Reset: N/A 00:21:43.779 Multiple Update Detection Support: N/A 00:21:43.779 Firmware Update Granularity: No Information Provided 00:21:43.779 Per-Namespace SMART Log: No 00:21:43.779 Asymmetric Namespace Access Log Page: Not Supported 00:21:43.779 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:43.779 Command Effects Log Page: Supported 00:21:43.779 Get Log Page Extended Data: Supported 00:21:43.779 Telemetry Log Pages: Not Supported 00:21:43.779 Persistent Event Log Pages: Not Supported 00:21:43.779 Supported Log Pages Log Page: May Support 00:21:43.779 Commands Supported & Effects Log Page: Not Supported 00:21:43.779 Feature Identifiers & Effects Log Page:May Support 00:21:43.779 NVMe-MI Commands & Effects Log Page: May Support 00:21:43.779 Data Area 4 for Telemetry Log: Not Supported 00:21:43.779 Error Log Page Entries Supported: 128 00:21:43.779 Keep Alive: Supported 00:21:43.779 Keep Alive Granularity: 10000 ms 00:21:43.779 00:21:43.779 NVM Command Set Attributes 00:21:43.779 ========================== 00:21:43.779 Submission Queue Entry Size 00:21:43.779 Max: 64 00:21:43.780 Min: 64 00:21:43.780 Completion Queue Entry Size 
00:21:43.780 Max: 16 00:21:43.780 Min: 16 00:21:43.780 Number of Namespaces: 32 00:21:43.780 Compare Command: Supported 00:21:43.780 Write Uncorrectable Command: Not Supported 00:21:43.780 Dataset Management Command: Supported 00:21:43.780 Write Zeroes Command: Supported 00:21:43.780 Set Features Save Field: Not Supported 00:21:43.780 Reservations: Supported 00:21:43.780 Timestamp: Not Supported 00:21:43.780 Copy: Supported 00:21:43.780 Volatile Write Cache: Present 00:21:43.780 Atomic Write Unit (Normal): 1 00:21:43.780 Atomic Write Unit (PFail): 1 00:21:43.780 Atomic Compare & Write Unit: 1 00:21:43.780 Fused Compare & Write: Supported 00:21:43.780 Scatter-Gather List 00:21:43.780 SGL Command Set: Supported 00:21:43.780 SGL Keyed: Supported 00:21:43.780 SGL Bit Bucket Descriptor: Not Supported 00:21:43.780 SGL Metadata Pointer: Not Supported 00:21:43.780 Oversized SGL: Not Supported 00:21:43.780 SGL Metadata Address: Not Supported 00:21:43.780 SGL Offset: Supported 00:21:43.780 Transport SGL Data Block: Not Supported 00:21:43.780 Replay Protected Memory Block: Not Supported 00:21:43.780 00:21:43.780 Firmware Slot Information 00:21:43.780 ========================= 00:21:43.780 Active slot: 1 00:21:43.780 Slot 1 Firmware Revision: 24.01.1 00:21:43.780 00:21:43.780 00:21:43.780 Commands Supported and Effects 00:21:43.780 ============================== 00:21:43.780 Admin Commands 00:21:43.780 -------------- 00:21:43.780 Get Log Page (02h): Supported 00:21:43.780 Identify (06h): Supported 00:21:43.780 Abort (08h): Supported 00:21:43.780 Set Features (09h): Supported 00:21:43.780 Get Features (0Ah): Supported 00:21:43.780 Asynchronous Event Request (0Ch): Supported 00:21:43.780 Keep Alive (18h): Supported 00:21:43.780 I/O Commands 00:21:43.780 ------------ 00:21:43.780 Flush (00h): Supported LBA-Change 00:21:43.780 Write (01h): Supported LBA-Change 00:21:43.780 Read (02h): Supported 00:21:43.780 Compare (05h): Supported 00:21:43.780 Write Zeroes (08h): Supported 
LBA-Change 00:21:43.780 Dataset Management (09h): Supported LBA-Change 00:21:43.780 Copy (19h): Supported LBA-Change 00:21:43.780 Unknown (79h): Supported LBA-Change 00:21:43.780 Unknown (7Ah): Supported 00:21:43.780 00:21:43.780 Error Log 00:21:43.780 ========= 00:21:43.780 00:21:43.780 Arbitration 00:21:43.780 =========== 00:21:43.780 Arbitration Burst: 1 00:21:43.780 00:21:43.780 Power Management 00:21:43.780 ================ 00:21:43.780 Number of Power States: 1 00:21:43.780 Current Power State: Power State #0 00:21:43.780 Power State #0: 00:21:43.780 Max Power: 0.00 W 00:21:43.780 Non-Operational State: Operational 00:21:43.780 Entry Latency: Not Reported 00:21:43.780 Exit Latency: Not Reported 00:21:43.780 Relative Read Throughput: 0 00:21:43.780 Relative Read Latency: 0 00:21:43.780 Relative Write Throughput: 0 00:21:43.780 Relative Write Latency: 0 00:21:43.780 Idle Power: Not Reported 00:21:43.780 Active Power: Not Reported 00:21:43.780 Non-Operational Permissive Mode: Not Supported 00:21:43.780 00:21:43.780 Health Information 00:21:43.780 ================== 00:21:43.780 Critical Warnings: 00:21:43.780 Available Spare Space: OK 00:21:43.780 Temperature: OK 00:21:43.780 Device Reliability: OK 00:21:43.780 Read Only: No 00:21:43.780 Volatile Memory Backup: OK 00:21:43.780 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:43.780 Temperature Threshol[2024-11-20 05:19:40.408575] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.780 [2024-11-20 05:19:40.408607] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.780 [2024-11-20 05:19:40.408612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:21:43.780 [2024-11-20 05:19:40.408616] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408636] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:43.780 [2024-11-20 05:19:40.408643] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50578 doesn't match qid 00:21:43.780 [2024-11-20 05:19:40.408654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32678 cdw0:5 sqhd:2e28 p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.408658] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50578 doesn't match qid 00:21:43.780 [2024-11-20 05:19:40.408664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32678 cdw0:5 sqhd:2e28 p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.408669] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50578 doesn't match qid 00:21:43.780 [2024-11-20 05:19:40.408674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32678 cdw0:5 sqhd:2e28 p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.408679] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50578 doesn't match qid 00:21:43.780 [2024-11-20 05:19:40.408685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32678 cdw0:5 sqhd:2e28 p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.408691] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.780 [2024-11-20 05:19:40.408724] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.780 [2024-11-20 
05:19:40.408728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.408736] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.780 [2024-11-20 05:19:40.408747] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408780] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.780 [2024-11-20 05:19:40.408784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.408789] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:43.780 [2024-11-20 05:19:40.408793] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:43.780 [2024-11-20 05:19:40.408797] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408804] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.780 [2024-11-20 05:19:40.408837] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.780 [2024-11-20 05:19:40.408842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 
05:19:40.408847] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408854] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.780 [2024-11-20 05:19:40.408883] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.780 [2024-11-20 05:19:40.408887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.408892] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408898] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.780 [2024-11-20 05:19:40.408930] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.780 [2024-11-20 05:19:40.408935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.408939] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408946] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.780 
[2024-11-20 05:19:40.408981] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.780 [2024-11-20 05:19:40.408985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.408990] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.408997] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.409004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.780 [2024-11-20 05:19:40.409028] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.780 [2024-11-20 05:19:40.409032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.409037] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.409043] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.409055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.780 [2024-11-20 05:19:40.409082] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.780 [2024-11-20 05:19:40.409087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:43.780 [2024-11-20 05:19:40.409091] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x419e8d07 00:21:43.780 [2024-11-20 05:19:40.409098] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409128] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409138] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409144] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409171] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409180] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409187] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409222] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409230] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409237] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409267] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409276] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409285] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409320] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409328] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409335] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409341] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409370] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409379] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409386] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409416] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409424] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409431] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409459] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409468] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409475] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409507] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409515] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409522] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409556] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409564] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409572] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 
05:19:40.409606] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409615] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409622] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409654] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409662] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409669] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409706] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409715] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409721] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409761] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409770] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409776] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409808] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409817] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409823] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409855] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:21:43.781 [2024-11-20 05:19:40.409865] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409872] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.781 [2024-11-20 05:19:40.409878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.781 [2024-11-20 05:19:40.409906] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.781 [2024-11-20 05:19:40.409910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.409914] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.409921] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.409927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.409951] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.409956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.409960] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.409967] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.409973] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.409995] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.409999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410004] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410011] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410039] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410055] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410062] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410099] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410108] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410115] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410149] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410160] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410166] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410198] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410207] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410214] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 
05:19:40.410244] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410253] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410260] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410298] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410307] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410314] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410342] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410351] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410358] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410388] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410397] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410403] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410432] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410440] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410447] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410480] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410489] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410496] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410535] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410544] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410551] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410579] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410588] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410595] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410601] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410625] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410634] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410641] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410677] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410686] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410693] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410724] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410733] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410740] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410772] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:21:43.782 [2024-11-20 05:19:40.410780] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410787] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.782 [2024-11-20 05:19:40.410793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.782 [2024-11-20 05:19:40.410817] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.782 [2024-11-20 05:19:40.410822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:21:43.783 [2024-11-20 05:19:40.410826] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.410833] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.410839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.783 [2024-11-20 
05:19:40.410863] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.783 [2024-11-20 05:19:40.410867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:21:43.783 [2024-11-20 05:19:40.410872] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.410879] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.410884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.783 [2024-11-20 05:19:40.410912] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.783 [2024-11-20 05:19:40.410916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:21:43.783 [2024-11-20 05:19:40.410920] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.410927] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.410933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.783 [2024-11-20 05:19:40.410960] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.783 [2024-11-20 05:19:40.410965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:43.783 [2024-11-20 05:19:40.410969] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.410976] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.410983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.783 [2024-11-20 05:19:40.411009] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.783 [2024-11-20 05:19:40.411013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:21:43.783 [2024-11-20 05:19:40.411018] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.411024] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.411030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.783 [2024-11-20 05:19:40.415053] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.783 [2024-11-20 05:19:40.415059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:21:43.783 [2024-11-20 05:19:40.415064] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.415071] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x419e8d07 00:21:43.783 [2024-11-20 05:19:40.415077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:43.783 [2024-11-20 05:19:40.415105] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:43.783 [2024-11-20 05:19:40.415109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0
00:21:43.783 [2024-11-20 05:19:40.415114] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x419e8d07
00:21:43.783 [2024-11-20 05:19:40.415119] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:21:43.783 d: 0 Kelvin (-273 Celsius)
00:21:43.783 Available Spare: 0%
00:21:43.783 Available Spare Threshold: 0%
00:21:43.783 Life Percentage Used: 0%
00:21:43.783 Data Units Read: 0
00:21:43.783 Data Units Written: 0
00:21:43.783 Host Read Commands: 0
00:21:43.783 Host Write Commands: 0
00:21:43.783 Controller Busy Time: 0 minutes
00:21:43.783 Power Cycles: 0
00:21:43.783 Power On Hours: 0 hours
00:21:43.783 Unsafe Shutdowns: 0
00:21:43.783 Unrecoverable Media Errors: 0
00:21:43.783 Lifetime Error Log Entries: 0
00:21:43.783 Warning Temperature Time: 0 minutes
00:21:43.783 Critical Temperature Time: 0 minutes
00:21:43.783 
00:21:43.783 Number of Queues
00:21:43.783 ================
00:21:43.783 Number of I/O Submission Queues: 127
00:21:43.783 Number of I/O Completion Queues: 127
00:21:43.783 
00:21:43.783 Active Namespaces
00:21:43.783 =================
00:21:43.783 Namespace ID:1
00:21:43.783 Error Recovery Timeout: Unlimited
00:21:43.783 Command Set Identifier: NVM (00h)
00:21:43.783 Deallocate: Supported
00:21:43.783 Deallocated/Unwritten Error: Not Supported
00:21:43.783 Deallocated Read Value: Unknown
00:21:43.783 Deallocate in Write Zeroes: Not Supported
00:21:43.783 Deallocated Guard Field: 0xFFFF
00:21:43.783 Flush: Supported
00:21:43.783 Reservation: Supported
00:21:43.783 Namespace Sharing Capabilities: Multiple Controllers
00:21:43.783 Size (in LBAs): 131072 (0GiB)
00:21:43.783 Capacity (in LBAs): 131072 (0GiB)
00:21:43.783 Utilization (in LBAs): 131072 (0GiB)
00:21:43.783 NGUID: ABCDEF0123456789ABCDEF0123456789
00:21:43.783 EUI64: ABCDEF0123456789
00:21:43.783 UUID: e1bb8d64-ba60-4202-b384-9afd18454dbc
00:21:43.783 Thin Provisioning: Not Supported
00:21:43.783 Per-NS Atomic Units: Yes
00:21:43.783 Atomic Boundary Size (Normal): 0
00:21:43.783 Atomic Boundary Size (PFail): 0
00:21:43.783 Atomic Boundary Offset: 0
00:21:43.783 Maximum Single Source Range Length: 65535
00:21:43.783 Maximum Copy Length: 65535
00:21:43.783 Maximum Source Range Count: 1
00:21:43.783 NGUID/EUI64 Never Reused: No
00:21:43.783 Namespace Write Protected: No
00:21:43.783 Number of LBA Formats: 1
00:21:43.783 Current LBA Format: LBA Format #00
00:21:43.783 LBA Format #00: Data Size: 512 Metadata Size: 0
00:21:43.783 
00:21:43.783 05:19:40 -- host/identify.sh@51 -- # sync
00:21:43.783 05:19:40 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:43.783 05:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:43.783 05:19:40 -- common/autotest_common.sh@10 -- # set +x
00:21:43.783 05:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:43.783 05:19:40 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:21:43.783 05:19:40 -- host/identify.sh@56 -- # nvmftestfini
00:21:43.783 05:19:40 -- nvmf/common.sh@476 -- # nvmfcleanup
00:21:43.783 05:19:40 -- nvmf/common.sh@116 -- # sync
00:21:43.783 05:19:40 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:21:43.783 05:19:40 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:21:43.783 05:19:40 -- nvmf/common.sh@119 -- # set +e
00:21:43.783 05:19:40 -- nvmf/common.sh@120 -- # for i in {1..20}
00:21:43.783 05:19:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:21:43.783 05:19:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:21:43.783 05:19:40 -- nvmf/common.sh@123 -- # set -e
00:21:43.783 05:19:40 -- nvmf/common.sh@124 -- # return 0
00:21:43.783 05:19:40 -- nvmf/common.sh@477 -- # '[' -n 346487 ']'
00:21:43.783 05:19:40 -- nvmf/common.sh@478
-- # killprocess 346487 00:21:43.783 05:19:40 -- common/autotest_common.sh@936 -- # '[' -z 346487 ']' 00:21:43.783 05:19:40 -- common/autotest_common.sh@940 -- # kill -0 346487 00:21:43.783 05:19:40 -- common/autotest_common.sh@941 -- # uname 00:21:43.783 05:19:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.783 05:19:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 346487 00:21:43.783 05:19:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:43.783 05:19:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:43.783 05:19:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 346487' 00:21:43.783 killing process with pid 346487 00:21:43.783 05:19:40 -- common/autotest_common.sh@955 -- # kill 346487 00:21:43.783 [2024-11-20 05:19:40.560596] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:43.783 05:19:40 -- common/autotest_common.sh@960 -- # wait 346487 00:21:44.042 05:19:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:44.042 05:19:40 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:44.042 00:21:44.042 real 0m7.130s 00:21:44.042 user 0m7.589s 00:21:44.043 sys 0m4.304s 00:21:44.043 05:19:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:44.043 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:44.043 ************************************ 00:21:44.043 END TEST nvmf_identify 00:21:44.043 ************************************ 00:21:44.043 05:19:40 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:21:44.043 05:19:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:44.043 05:19:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.043 05:19:40 -- common/autotest_common.sh@10 -- # set +x 00:21:44.043 
************************************ 00:21:44.043 START TEST nvmf_perf 00:21:44.043 ************************************ 00:21:44.043 05:19:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:21:44.302 * Looking for test storage... 00:21:44.302 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:21:44.302 05:19:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:44.302 05:19:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:44.302 05:19:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:44.302 05:19:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:44.302 05:19:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:44.302 05:19:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:44.302 05:19:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:44.302 05:19:40 -- scripts/common.sh@335 -- # IFS=.-: 00:21:44.302 05:19:40 -- scripts/common.sh@335 -- # read -ra ver1 00:21:44.302 05:19:40 -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.302 05:19:40 -- scripts/common.sh@336 -- # read -ra ver2 00:21:44.302 05:19:40 -- scripts/common.sh@337 -- # local 'op=<' 00:21:44.302 05:19:40 -- scripts/common.sh@339 -- # ver1_l=2 00:21:44.302 05:19:40 -- scripts/common.sh@340 -- # ver2_l=1 00:21:44.302 05:19:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:44.302 05:19:40 -- scripts/common.sh@343 -- # case "$op" in 00:21:44.302 05:19:40 -- scripts/common.sh@344 -- # : 1 00:21:44.302 05:19:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:44.302 05:19:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.302 05:19:40 -- scripts/common.sh@364 -- # decimal 1 00:21:44.302 05:19:40 -- scripts/common.sh@352 -- # local d=1 00:21:44.302 05:19:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.302 05:19:40 -- scripts/common.sh@354 -- # echo 1 00:21:44.302 05:19:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:44.302 05:19:41 -- scripts/common.sh@365 -- # decimal 2 00:21:44.302 05:19:41 -- scripts/common.sh@352 -- # local d=2 00:21:44.302 05:19:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.302 05:19:41 -- scripts/common.sh@354 -- # echo 2 00:21:44.302 05:19:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:44.302 05:19:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:44.302 05:19:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:44.302 05:19:41 -- scripts/common.sh@367 -- # return 0 00:21:44.302 05:19:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.302 05:19:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:44.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.302 --rc genhtml_branch_coverage=1 00:21:44.302 --rc genhtml_function_coverage=1 00:21:44.302 --rc genhtml_legend=1 00:21:44.302 --rc geninfo_all_blocks=1 00:21:44.302 --rc geninfo_unexecuted_blocks=1 00:21:44.302 00:21:44.302 ' 00:21:44.302 05:19:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:44.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.302 --rc genhtml_branch_coverage=1 00:21:44.302 --rc genhtml_function_coverage=1 00:21:44.302 --rc genhtml_legend=1 00:21:44.302 --rc geninfo_all_blocks=1 00:21:44.302 --rc geninfo_unexecuted_blocks=1 00:21:44.302 00:21:44.302 ' 00:21:44.302 05:19:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:44.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.302 --rc genhtml_branch_coverage=1 00:21:44.302 --rc 
genhtml_function_coverage=1 00:21:44.302 --rc genhtml_legend=1 00:21:44.302 --rc geninfo_all_blocks=1 00:21:44.302 --rc geninfo_unexecuted_blocks=1 00:21:44.302 00:21:44.302 ' 00:21:44.302 05:19:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:44.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.302 --rc genhtml_branch_coverage=1 00:21:44.302 --rc genhtml_function_coverage=1 00:21:44.302 --rc genhtml_legend=1 00:21:44.302 --rc geninfo_all_blocks=1 00:21:44.302 --rc geninfo_unexecuted_blocks=1 00:21:44.302 00:21:44.302 ' 00:21:44.302 05:19:41 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.302 05:19:41 -- nvmf/common.sh@7 -- # uname -s 00:21:44.303 05:19:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.303 05:19:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.303 05:19:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.303 05:19:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.303 05:19:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.303 05:19:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.303 05:19:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.303 05:19:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.303 05:19:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.303 05:19:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.303 05:19:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:44.303 05:19:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:44.303 05:19:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.303 05:19:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.303 05:19:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:44.303 05:19:41 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:21:44.303 05:19:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.303 05:19:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.303 05:19:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.303 05:19:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.303 05:19:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.303 05:19:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.303 05:19:41 -- paths/export.sh@5 -- # export PATH 00:21:44.303 05:19:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.303 05:19:41 -- nvmf/common.sh@46 -- # : 0 00:21:44.303 05:19:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:44.303 05:19:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:44.303 05:19:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:44.303 05:19:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.303 05:19:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.303 05:19:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:44.303 05:19:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:44.303 05:19:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:44.303 05:19:41 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:44.303 05:19:41 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:44.303 05:19:41 -- host/perf.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:21:44.303 05:19:41 -- host/perf.sh@17 -- # nvmftestinit 00:21:44.303 05:19:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:44.303 05:19:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.303 05:19:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:44.303 05:19:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:44.303 05:19:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:44.303 05:19:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.303 05:19:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.303 05:19:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.303 05:19:41 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:44.303 05:19:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:44.303 05:19:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:44.303 05:19:41 -- common/autotest_common.sh@10 -- # set +x 00:21:49.582 05:19:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:49.582 05:19:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:49.582 05:19:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:49.582 05:19:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:49.582 05:19:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:49.582 05:19:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:49.582 05:19:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:49.582 05:19:45 -- nvmf/common.sh@294 -- # net_devs=() 00:21:49.582 05:19:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:49.582 05:19:45 -- nvmf/common.sh@295 -- # e810=() 00:21:49.582 05:19:45 -- nvmf/common.sh@295 -- # local -ga e810 00:21:49.582 05:19:45 -- nvmf/common.sh@296 -- # x722=() 00:21:49.582 05:19:45 -- nvmf/common.sh@296 -- # local -ga x722 00:21:49.582 05:19:45 -- nvmf/common.sh@297 -- # mlx=() 00:21:49.582 05:19:45 -- nvmf/common.sh@297 -- # local -ga 
mlx 00:21:49.582 05:19:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.582 05:19:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:49.582 05:19:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:49.582 05:19:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:49.582 05:19:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:49.582 05:19:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:49.582 05:19:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:49.582 05:19:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:49.582 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:49.582 05:19:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@349 -- # [[ 0x159b 
== \0\x\1\0\1\7 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:49.582 05:19:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:49.582 05:19:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:49.582 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:49.582 05:19:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:49.582 05:19:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:49.582 05:19:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:21:49.582 05:19:45 -- nvmf/common.sh@376 -- # modinfo irdma 00:21:49.582 05:19:45 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:21:49.582 05:19:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:49.582 05:19:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.582 05:19:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:49.582 05:19:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.582 05:19:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:49.582 Found net devices under 0000:af:00.0: cvl_0_0 00:21:49.582 05:19:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.582 05:19:45 
-- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:49.582 05:19:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.582 05:19:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:49.582 05:19:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.582 05:19:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:49.582 Found net devices under 0000:af:00.1: cvl_0_1 00:21:49.582 05:19:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.582 05:19:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:49.582 05:19:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:49.582 05:19:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:49.582 05:19:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:49.582 05:19:45 -- nvmf/common.sh@57 -- # uname 00:21:49.582 05:19:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:49.582 05:19:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:49.582 05:19:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:49.582 05:19:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:49.582 05:19:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:49.582 05:19:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:49.582 05:19:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:49.582 05:19:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:49.582 05:19:45 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:49.582 05:19:45 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:49.582 05:19:45 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:49.582 05:19:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:49.582 05:19:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:49.582 05:19:45 -- nvmf/common.sh@93 
-- # rxe_cfg rxe-net 00:21:49.582 05:19:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:49.582 05:19:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:49.582 05:19:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:49.582 05:19:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.582 05:19:45 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:21:49.582 05:19:45 -- nvmf/common.sh@104 -- # continue 2 00:21:49.582 05:19:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:49.582 05:19:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.582 05:19:45 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.582 05:19:45 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:21:49.582 05:19:45 -- nvmf/common.sh@104 -- # continue 2 00:21:49.582 05:19:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:49.582 05:19:45 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:21:49.582 05:19:45 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:21:49.582 05:19:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:21:49.582 05:19:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:49.582 05:19:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:49.582 05:19:45 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:49.582 05:19:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:49.582 05:19:45 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:21:49.582 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:49.582 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:21:49.582 altname enp175s0f0np0 00:21:49.582 altname ens801f0np0 00:21:49.582 inet 
192.168.100.8/24 scope global cvl_0_0 00:21:49.582 valid_lft forever preferred_lft forever 00:21:49.582 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:21:49.582 valid_lft forever preferred_lft forever 00:21:49.583 05:19:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:49.583 05:19:45 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:21:49.583 05:19:45 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:21:49.583 05:19:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:21:49.583 05:19:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:49.583 05:19:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:49.583 05:19:45 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:49.583 05:19:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:49.583 05:19:45 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:21:49.583 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:49.583 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:21:49.583 altname enp175s0f1np1 00:21:49.583 altname ens801f1np1 00:21:49.583 inet 192.168.100.9/24 scope global cvl_0_1 00:21:49.583 valid_lft forever preferred_lft forever 00:21:49.583 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:21:49.583 valid_lft forever preferred_lft forever 00:21:49.583 05:19:45 -- nvmf/common.sh@410 -- # return 0 00:21:49.583 05:19:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:49.583 05:19:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:49.583 05:19:45 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:49.583 05:19:45 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:49.583 05:19:45 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:49.583 05:19:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:49.583 05:19:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:49.583 05:19:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:49.583 05:19:45 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:49.583 05:19:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:49.583 05:19:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:49.583 05:19:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.583 05:19:45 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:49.583 05:19:45 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:21:49.583 05:19:45 -- nvmf/common.sh@104 -- # continue 2 00:21:49.583 05:19:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:49.583 05:19:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.583 05:19:45 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:49.583 05:19:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.583 05:19:45 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:49.583 05:19:45 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:21:49.583 05:19:45 -- nvmf/common.sh@104 -- # continue 2 00:21:49.583 05:19:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:49.583 05:19:45 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:21:49.583 05:19:45 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:21:49.583 05:19:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:21:49.583 05:19:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:49.583 05:19:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:49.583 05:19:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:49.583 05:19:45 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:21:49.583 05:19:45 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:21:49.583 05:19:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:49.583 05:19:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:49.583 05:19:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:21:49.583 05:19:45 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:49.583 
192.168.100.9' 00:21:49.583 05:19:45 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:49.583 192.168.100.9' 00:21:49.583 05:19:45 -- nvmf/common.sh@445 -- # head -n 1 00:21:49.583 05:19:45 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:49.583 05:19:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:49.583 192.168.100.9' 00:21:49.583 05:19:45 -- nvmf/common.sh@446 -- # tail -n +2 00:21:49.583 05:19:45 -- nvmf/common.sh@446 -- # head -n 1 00:21:49.583 05:19:46 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:49.583 05:19:46 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:49.583 05:19:46 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:49.583 05:19:46 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:49.583 05:19:46 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:49.583 05:19:46 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:49.583 05:19:46 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:49.583 05:19:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:49.583 05:19:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.583 05:19:46 -- common/autotest_common.sh@10 -- # set +x 00:21:49.583 05:19:46 -- nvmf/common.sh@469 -- # nvmfpid=349793 00:21:49.583 05:19:46 -- nvmf/common.sh@470 -- # waitforlisten 349793 00:21:49.583 05:19:46 -- common/autotest_common.sh@829 -- # '[' -z 349793 ']' 00:21:49.583 05:19:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.583 05:19:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.583 05:19:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:49.583 05:19:46 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.583 05:19:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.583 05:19:46 -- common/autotest_common.sh@10 -- # set +x 00:21:49.583 [2024-11-20 05:19:46.074007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:49.583 [2024-11-20 05:19:46.074059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.583 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.583 [2024-11-20 05:19:46.128760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.583 [2024-11-20 05:19:46.204027] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:49.583 [2024-11-20 05:19:46.204151] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.583 [2024-11-20 05:19:46.204159] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.583 [2024-11-20 05:19:46.204166] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.583 [2024-11-20 05:19:46.204201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.583 [2024-11-20 05:19:46.204311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.583 [2024-11-20 05:19:46.204373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.583 [2024-11-20 05:19:46.204374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.151 05:19:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.151 05:19:46 -- common/autotest_common.sh@862 -- # return 0 00:21:50.151 05:19:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:50.151 05:19:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.151 05:19:46 -- common/autotest_common.sh@10 -- # set +x 00:21:50.151 05:19:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.151 05:19:46 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:50.151 05:19:46 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:53.440 05:19:49 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:53.440 05:19:49 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:53.440 05:19:50 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:53.440 05:19:50 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:53.700 05:19:50 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:53.700 05:19:50 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:21:53.700 05:19:50 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:53.700 05:19:50 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:21:53.700 05:19:50 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:21:53.700 [2024-11-20 05:19:50.506656] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:21:53.700 [2024-11-20 05:19:50.519528] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xfd8740/0xfd7d80) succeed. 00:21:53.959 [2024-11-20 05:19:50.528538] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xfd9ab0/0xfd8300) succeed. 00:21:53.959 [2024-11-20 05:19:50.528558] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:21:53.959 05:19:50 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.959 05:19:50 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:53.959 05:19:50 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.219 05:19:50 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:54.219 05:19:50 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:54.478 05:19:51 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:54.478 [2024-11-20 05:19:51.299723] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:54.738 05:19:51 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:54.738 05:19:51 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:21:54.738 05:19:51 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 
4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:54.738 05:19:51 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:54.738 05:19:51 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:56.117 Initializing NVMe Controllers 00:21:56.117 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:21:56.117 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:21:56.117 Initialization complete. Launching workers. 00:21:56.117 ======================================================== 00:21:56.117 Latency(us) 00:21:56.117 Device Information : IOPS MiB/s Average min max 00:21:56.117 PCIE (0000:5e:00.0) NSID 1 from core 0: 100186.46 391.35 318.94 23.88 4374.73 00:21:56.117 ======================================================== 00:21:56.117 Total : 100186.46 391.35 318.94 23.88 4374.73 00:21:56.117 00:21:56.117 05:19:52 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:56.117 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.408 Initializing NVMe Controllers 00:21:59.408 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:59.408 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:59.408 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:59.408 Initialization complete. Launching workers. 
00:21:59.408 ======================================================== 00:21:59.408 Latency(us) 00:21:59.408 Device Information : IOPS MiB/s Average min max 00:21:59.408 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6001.99 23.45 166.40 55.64 4094.55 00:21:59.408 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4893.99 19.12 202.50 78.46 4118.34 00:21:59.408 ======================================================== 00:21:59.408 Total : 10895.98 42.56 182.62 55.64 4118.34 00:21:59.408 00:21:59.408 05:19:56 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:59.408 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.698 Initializing NVMe Controllers 00:22:02.698 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:02.698 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:02.698 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:02.698 Initialization complete. Launching workers. 
00:22:02.698 ======================================================== 00:22:02.698 Latency(us) 00:22:02.698 Device Information : IOPS MiB/s Average min max 00:22:02.698 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19123.98 74.70 1673.68 422.18 8946.95 00:22:02.698 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7987.07 5756.92 12021.81 00:22:02.698 ======================================================== 00:22:02.698 Total : 23155.98 90.45 2772.99 422.18 12021.81 00:22:02.698 00:22:02.698 05:19:59 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:02.698 05:19:59 -- host/perf.sh@59 -- # [[ rdma == \r\d\m\a ]] 00:22:02.698 05:19:59 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:22:02.698 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.957 No valid NVMe controllers or AIO or URING devices found 00:22:02.957 Initializing NVMe Controllers 00:22:02.957 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:02.957 Controller IO queue size 128, less than required. 00:22:02.957 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:02.957 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:02.957 Controller IO queue size 128, less than required. 00:22:02.957 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:02.957 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:02.957 WARNING: Some requested NVMe devices were skipped 00:22:02.957 05:19:59 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:22:02.957 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.233 Initializing NVMe Controllers 00:22:08.233 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:08.233 Controller IO queue size 128, less than required. 00:22:08.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:08.233 Controller IO queue size 128, less than required. 00:22:08.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:08.233 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:08.233 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:08.233 Initialization complete. Launching workers. 
00:22:08.233 00:22:08.233 ==================== 00:22:08.233 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:08.233 RDMA transport: 00:22:08.233 dev name: rocep175s0f0 00:22:08.233 polls: 368388 00:22:08.233 idle_polls: 362733 00:22:08.233 completions: 41622 00:22:08.233 queued_requests: 1 00:22:08.233 total_send_wrs: 20900 00:22:08.233 send_doorbell_updates: 5089 00:22:08.233 total_recv_wrs: 20900 00:22:08.233 recv_doorbell_updates: 5090 00:22:08.233 --------------------------------- 00:22:08.233 00:22:08.233 ==================== 00:22:08.233 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:08.233 RDMA transport: 00:22:08.233 dev name: rocep175s0f0 00:22:08.233 polls: 367308 00:22:08.233 idle_polls: 360059 00:22:08.233 completions: 48205 00:22:08.233 queued_requests: 1 00:22:08.233 total_send_wrs: 24190 00:22:08.233 send_doorbell_updates: 6422 00:22:08.233 total_recv_wrs: 24190 00:22:08.233 recv_doorbell_updates: 6423 00:22:08.233 --------------------------------- 00:22:08.233 ======================================================== 00:22:08.233 Latency(us) 00:22:08.233 Device Information : IOPS MiB/s Average min max 00:22:08.233 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5250.17 1312.54 24470.13 16791.65 52483.98 00:22:08.233 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6071.60 1517.90 20905.68 15393.28 38776.57 00:22:08.233 ======================================================== 00:22:08.233 Total : 11321.77 2830.44 22558.60 15393.28 52483.98 00:22:08.233 00:22:08.233 05:20:04 -- host/perf.sh@66 -- # sync 00:22:08.233 05:20:04 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.233 05:20:04 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:22:08.233 05:20:04 -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 
00:22:08.233 05:20:04 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:22:10.770 05:20:07 -- host/perf.sh@72 -- # ls_guid=e5a92b71-b3c8-442e-af15-e8a8038215d9 00:22:10.770 05:20:07 -- host/perf.sh@73 -- # get_lvs_free_mb e5a92b71-b3c8-442e-af15-e8a8038215d9 00:22:10.770 05:20:07 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e5a92b71-b3c8-442e-af15-e8a8038215d9 00:22:10.770 05:20:07 -- common/autotest_common.sh@1354 -- # local lvs_info 00:22:10.770 05:20:07 -- common/autotest_common.sh@1355 -- # local fc 00:22:10.770 05:20:07 -- common/autotest_common.sh@1356 -- # local cs 00:22:10.770 05:20:07 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:11.029 05:20:07 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:22:11.029 { 00:22:11.029 "uuid": "e5a92b71-b3c8-442e-af15-e8a8038215d9", 00:22:11.029 "name": "lvs_0", 00:22:11.029 "base_bdev": "Nvme0n1", 00:22:11.029 "total_data_clusters": 238234, 00:22:11.029 "free_clusters": 238234, 00:22:11.029 "block_size": 512, 00:22:11.029 "cluster_size": 4194304 00:22:11.029 } 00:22:11.029 ]' 00:22:11.029 05:20:07 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e5a92b71-b3c8-442e-af15-e8a8038215d9") .free_clusters' 00:22:11.029 05:20:07 -- common/autotest_common.sh@1358 -- # fc=238234 00:22:11.029 05:20:07 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e5a92b71-b3c8-442e-af15-e8a8038215d9") .cluster_size' 00:22:11.029 05:20:07 -- common/autotest_common.sh@1359 -- # cs=4194304 00:22:11.029 05:20:07 -- common/autotest_common.sh@1362 -- # free_mb=952936 00:22:11.029 05:20:07 -- common/autotest_common.sh@1363 -- # echo 952936 00:22:11.029 952936 00:22:11.029 05:20:07 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:22:11.029 05:20:07 -- host/perf.sh@78 -- # free_mb=20480 00:22:11.029 05:20:07 -- host/perf.sh@80 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e5a92b71-b3c8-442e-af15-e8a8038215d9 lbd_0 20480 00:22:11.597 05:20:08 -- host/perf.sh@80 -- # lb_guid=3a846783-a3d4-454d-a21f-dd723ebb9e3c 00:22:11.597 05:20:08 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3a846783-a3d4-454d-a21f-dd723ebb9e3c lvs_n_0 00:22:12.165 05:20:08 -- host/perf.sh@83 -- # ls_nested_guid=bead6179-0e79-4308-a20f-030b8ab49f0b 00:22:12.165 05:20:08 -- host/perf.sh@84 -- # get_lvs_free_mb bead6179-0e79-4308-a20f-030b8ab49f0b 00:22:12.165 05:20:08 -- common/autotest_common.sh@1353 -- # local lvs_uuid=bead6179-0e79-4308-a20f-030b8ab49f0b 00:22:12.165 05:20:08 -- common/autotest_common.sh@1354 -- # local lvs_info 00:22:12.165 05:20:08 -- common/autotest_common.sh@1355 -- # local fc 00:22:12.165 05:20:08 -- common/autotest_common.sh@1356 -- # local cs 00:22:12.165 05:20:08 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:12.424 05:20:09 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:22:12.424 { 00:22:12.424 "uuid": "e5a92b71-b3c8-442e-af15-e8a8038215d9", 00:22:12.424 "name": "lvs_0", 00:22:12.424 "base_bdev": "Nvme0n1", 00:22:12.424 "total_data_clusters": 238234, 00:22:12.424 "free_clusters": 233114, 00:22:12.424 "block_size": 512, 00:22:12.424 "cluster_size": 4194304 00:22:12.424 }, 00:22:12.424 { 00:22:12.424 "uuid": "bead6179-0e79-4308-a20f-030b8ab49f0b", 00:22:12.424 "name": "lvs_n_0", 00:22:12.424 "base_bdev": "3a846783-a3d4-454d-a21f-dd723ebb9e3c", 00:22:12.424 "total_data_clusters": 5114, 00:22:12.424 "free_clusters": 5114, 00:22:12.424 "block_size": 512, 00:22:12.424 "cluster_size": 4194304 00:22:12.424 } 00:22:12.424 ]' 00:22:12.424 05:20:09 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="bead6179-0e79-4308-a20f-030b8ab49f0b") .free_clusters' 00:22:12.424 05:20:09 -- 
common/autotest_common.sh@1358 -- # fc=5114 00:22:12.424 05:20:09 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="bead6179-0e79-4308-a20f-030b8ab49f0b") .cluster_size' 00:22:12.424 05:20:09 -- common/autotest_common.sh@1359 -- # cs=4194304 00:22:12.424 05:20:09 -- common/autotest_common.sh@1362 -- # free_mb=20456 00:22:12.424 05:20:09 -- common/autotest_common.sh@1363 -- # echo 20456 00:22:12.424 20456 00:22:12.424 05:20:09 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:22:12.424 05:20:09 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bead6179-0e79-4308-a20f-030b8ab49f0b lbd_nest_0 20456 00:22:12.682 05:20:09 -- host/perf.sh@88 -- # lb_nested_guid=a9831afa-6fc3-47a0-9f88-fd4f787ecd9f 00:22:12.682 05:20:09 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.941 05:20:09 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:22:12.941 05:20:09 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a9831afa-6fc3-47a0-9f88-fd4f787ecd9f 00:22:13.199 05:20:09 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:13.199 05:20:09 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:22:13.199 05:20:09 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:22:13.199 05:20:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:22:13.199 05:20:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:13.200 05:20:09 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:13.200 EAL: No free 2048 kB hugepages reported on node 1 
00:22:25.406 Initializing NVMe Controllers 00:22:25.406 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.406 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:25.406 Initialization complete. Launching workers. 00:22:25.407 ======================================================== 00:22:25.407 Latency(us) 00:22:25.407 Device Information : IOPS MiB/s Average min max 00:22:25.407 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5312.20 2.59 187.66 74.08 7029.94 00:22:25.407 ======================================================== 00:22:25.407 Total : 5312.20 2.59 187.66 74.08 7029.94 00:22:25.407 00:22:25.407 05:20:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:25.407 05:20:21 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:25.407 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.612 Initializing NVMe Controllers 00:22:37.612 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:37.612 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:37.612 Initialization complete. Launching workers. 
00:22:37.612 ======================================================== 00:22:37.612 Latency(us) 00:22:37.612 Device Information : IOPS MiB/s Average min max 00:22:37.612 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 125.30 15.66 7985.01 4987.46 11973.83 00:22:37.612 ======================================================== 00:22:37.612 Total : 125.30 15.66 7985.01 4987.46 11973.83 00:22:37.612 00:22:37.612 05:20:32 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:22:37.612 05:20:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:37.612 05:20:32 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:37.612 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.591 Initializing NVMe Controllers 00:22:47.591 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.591 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:47.591 Initialization complete. Launching workers. 
00:22:47.591 ======================================================== 00:22:47.591 Latency(us) 00:22:47.591 Device Information : IOPS MiB/s Average min max 00:22:47.591 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11796.39 5.76 2712.72 826.30 9276.22 00:22:47.591 ======================================================== 00:22:47.591 Total : 11796.39 5.76 2712.72 826.30 9276.22 00:22:47.591 00:22:47.591 05:20:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:47.591 05:20:43 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:47.591 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.800 Initializing NVMe Controllers 00:22:59.800 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.800 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.800 Initialization complete. Launching workers. 
00:22:59.800 ======================================================== 00:22:59.800 Latency(us) 00:22:59.800 Device Information : IOPS MiB/s Average min max 00:22:59.800 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9193.42 1149.18 3481.03 600.18 7529.74 00:22:59.800 ======================================================== 00:22:59.800 Total : 9193.42 1149.18 3481.03 600.18 7529.74 00:22:59.800 00:22:59.800 05:20:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:22:59.800 05:20:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:59.800 05:20:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:59.800 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.778 Initializing NVMe Controllers 00:23:09.778 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:09.778 Controller IO queue size 128, less than required. 00:23:09.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:09.778 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:09.778 Initialization complete. Launching workers. 
00:23:09.778 ======================================================== 00:23:09.778 Latency(us) 00:23:09.778 Device Information : IOPS MiB/s Average min max 00:23:09.778 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19733.21 9.64 6489.29 1710.32 15662.49 00:23:09.778 ======================================================== 00:23:09.778 Total : 19733.21 9.64 6489.29 1710.32 15662.49 00:23:09.778 00:23:09.778 05:21:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:09.778 05:21:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:09.778 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.988 Initializing NVMe Controllers 00:23:21.988 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:21.988 Controller IO queue size 128, less than required. 00:23:21.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:21.988 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:21.988 Initialization complete. Launching workers. 
00:23:21.988 ======================================================== 00:23:21.988 Latency(us) 00:23:21.988 Device Information : IOPS MiB/s Average min max 00:23:21.988 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6488.95 811.12 19727.21 7876.82 31904.14 00:23:21.988 ======================================================== 00:23:21.988 Total : 6488.95 811.12 19727.21 7876.82 31904.14 00:23:21.988 00:23:21.988 05:21:17 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.988 05:21:18 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9831afa-6fc3-47a0-9f88-fd4f787ecd9f 00:23:21.988 05:21:18 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:22.247 05:21:18 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3a846783-a3d4-454d-a21f-dd723ebb9e3c 00:23:22.505 05:21:19 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:22.763 05:21:19 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:22.763 05:21:19 -- host/perf.sh@114 -- # nvmftestfini 00:23:22.763 05:21:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:22.764 05:21:19 -- nvmf/common.sh@116 -- # sync 00:23:22.764 05:21:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:22.764 05:21:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:22.764 05:21:19 -- nvmf/common.sh@119 -- # set +e 00:23:22.764 05:21:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:22.764 05:21:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:22.764 rmmod nvme_rdma 00:23:22.764 rmmod nvme_fabrics 00:23:22.764 05:21:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:22.764 05:21:19 -- nvmf/common.sh@123 -- # set -e 
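The six perf runs above come from a nested sweep over queue depths and I/O sizes (`qd_depth=("1" "32" "128")`, `io_size=("512" "131072")`, per the `host/perf.sh@95-99` xtrace lines). A minimal sketch of that loop, reconstructed from the log — the `PERF` path and flags are taken from the log itself, and the commands are printed rather than executed since `spdk_nvme_perf` requires real hardware:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the host/perf.sh sweep seen in the log.
# Dry-run: collect the command lines instead of launching spdk_nvme_perf.
PERF=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRID='trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

qd_depth=("1" "32" "128")   # queue depths swept by the test
io_size=("512" "131072")    # I/O sizes in bytes (512 B and 128 KiB)

cmds=()
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    # 50/50 random read/write for 10 s against the RDMA listener
    cmds+=("$PERF -q $qd -o $o -w randrw -M 50 -t 10 -r '$TRID'")
  done
done
printf '%s\n' "${cmds[@]}"
```

This yields the same 3×2 = 6 combinations the log records, in the same order (qd-major, io-size-minor).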
00:23:22.764 05:21:19 -- nvmf/common.sh@124 -- # return 0 00:23:22.764 05:21:19 -- nvmf/common.sh@477 -- # '[' -n 349793 ']' 00:23:22.764 05:21:19 -- nvmf/common.sh@478 -- # killprocess 349793 00:23:22.764 05:21:19 -- common/autotest_common.sh@936 -- # '[' -z 349793 ']' 00:23:22.764 05:21:19 -- common/autotest_common.sh@940 -- # kill -0 349793 00:23:22.764 05:21:19 -- common/autotest_common.sh@941 -- # uname 00:23:22.764 05:21:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:22.764 05:21:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 349793 00:23:22.764 05:21:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:22.764 05:21:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:22.764 05:21:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 349793' 00:23:22.764 killing process with pid 349793 00:23:22.764 05:21:19 -- common/autotest_common.sh@955 -- # kill 349793 00:23:22.764 05:21:19 -- common/autotest_common.sh@960 -- # wait 349793 00:23:24.668 05:21:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:24.668 05:21:21 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:24.668 00:23:24.668 real 1m40.210s 00:23:24.668 user 6m22.871s 00:23:24.668 sys 0m5.370s 00:23:24.668 05:21:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:24.668 05:21:21 -- common/autotest_common.sh@10 -- # set +x 00:23:24.668 ************************************ 00:23:24.668 END TEST nvmf_perf 00:23:24.668 ************************************ 00:23:24.668 05:21:21 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:24.668 05:21:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:24.668 05:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:24.668 05:21:21 -- common/autotest_common.sh@10 -- # set +x 00:23:24.668 ************************************ 00:23:24.668 START TEST 
nvmf_fio_host 00:23:24.668 ************************************ 00:23:24.668 05:21:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:24.668 * Looking for test storage... 00:23:24.668 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:23:24.668 05:21:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:24.668 05:21:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:24.668 05:21:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:24.668 05:21:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:24.668 05:21:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:24.668 05:21:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:24.668 05:21:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:24.668 05:21:21 -- scripts/common.sh@335 -- # IFS=.-: 00:23:24.668 05:21:21 -- scripts/common.sh@335 -- # read -ra ver1 00:23:24.668 05:21:21 -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.668 05:21:21 -- scripts/common.sh@336 -- # read -ra ver2 00:23:24.668 05:21:21 -- scripts/common.sh@337 -- # local 'op=<' 00:23:24.668 05:21:21 -- scripts/common.sh@339 -- # ver1_l=2 00:23:24.668 05:21:21 -- scripts/common.sh@340 -- # ver2_l=1 00:23:24.668 05:21:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:24.668 05:21:21 -- scripts/common.sh@343 -- # case "$op" in 00:23:24.668 05:21:21 -- scripts/common.sh@344 -- # : 1 00:23:24.668 05:21:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:24.668 05:21:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.668 05:21:21 -- scripts/common.sh@364 -- # decimal 1 00:23:24.668 05:21:21 -- scripts/common.sh@352 -- # local d=1 00:23:24.668 05:21:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.668 05:21:21 -- scripts/common.sh@354 -- # echo 1 00:23:24.668 05:21:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:24.669 05:21:21 -- scripts/common.sh@365 -- # decimal 2 00:23:24.669 05:21:21 -- scripts/common.sh@352 -- # local d=2 00:23:24.669 05:21:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.669 05:21:21 -- scripts/common.sh@354 -- # echo 2 00:23:24.669 05:21:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:24.669 05:21:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:24.669 05:21:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:24.669 05:21:21 -- scripts/common.sh@367 -- # return 0 00:23:24.669 05:21:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.669 05:21:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:24.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.669 --rc genhtml_branch_coverage=1 00:23:24.669 --rc genhtml_function_coverage=1 00:23:24.669 --rc genhtml_legend=1 00:23:24.669 --rc geninfo_all_blocks=1 00:23:24.669 --rc geninfo_unexecuted_blocks=1 00:23:24.669 00:23:24.669 ' 00:23:24.669 05:21:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:24.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.669 --rc genhtml_branch_coverage=1 00:23:24.669 --rc genhtml_function_coverage=1 00:23:24.669 --rc genhtml_legend=1 00:23:24.669 --rc geninfo_all_blocks=1 00:23:24.669 --rc geninfo_unexecuted_blocks=1 00:23:24.669 00:23:24.669 ' 00:23:24.669 05:21:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:24.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.669 --rc genhtml_branch_coverage=1 00:23:24.669 --rc 
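The `scripts/common.sh` trace above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2` → `return 0`) is a component-wise dotted-version comparison used to pick lcov options. A simplified sketch of that logic, under the assumption that it compares numeric components left to right and treats missing components as 0 (the helper name `ver_lt` is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# Simplified sketch of the cmp_versions '<' path exercised in the log.
# Returns 0 (true) if $1 is strictly older than $2.
ver_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    # Missing components (e.g. "2" vs "1.15") default to 0
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 is older than 2"
```

Note the comparison is numeric per component, so `1.15 < 2` holds even though `"1.15" > "2"` lexically — which is exactly why the log's lcov 1.15 takes the `lt` branch.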
genhtml_function_coverage=1 00:23:24.669 --rc genhtml_legend=1 00:23:24.669 --rc geninfo_all_blocks=1 00:23:24.669 --rc geninfo_unexecuted_blocks=1 00:23:24.669 00:23:24.669 ' 00:23:24.669 05:21:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:24.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.669 --rc genhtml_branch_coverage=1 00:23:24.669 --rc genhtml_function_coverage=1 00:23:24.669 --rc genhtml_legend=1 00:23:24.669 --rc geninfo_all_blocks=1 00:23:24.669 --rc geninfo_unexecuted_blocks=1 00:23:24.669 00:23:24.669 ' 00:23:24.669 05:21:21 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:23:24.669 05:21:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.669 05:21:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.669 05:21:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.669 05:21:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.669 05:21:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.669 05:21:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.669 05:21:21 -- paths/export.sh@5 -- # export PATH 00:23:24.669 05:21:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.669 05:21:21 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.669 05:21:21 -- nvmf/common.sh@7 -- # uname -s 00:23:24.669 05:21:21 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:24.669 05:21:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.669 05:21:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.669 05:21:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.669 05:21:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.669 05:21:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.669 05:21:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.669 05:21:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.669 05:21:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.669 05:21:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.669 05:21:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:24.669 05:21:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:24.669 05:21:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.669 05:21:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.669 05:21:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:24.669 05:21:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:23:24.669 05:21:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.669 05:21:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.669 05:21:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.669 05:21:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.669 05:21:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.669 05:21:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.669 05:21:21 -- paths/export.sh@5 -- # export PATH 00:23:24.669 05:21:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.669 05:21:21 -- nvmf/common.sh@46 -- # : 0 00:23:24.669 05:21:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:24.669 05:21:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:24.669 05:21:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:24.669 05:21:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.669 05:21:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.669 05:21:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:24.669 05:21:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:24.669 05:21:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:24.669 05:21:21 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:23:24.669 05:21:21 -- host/fio.sh@14 -- # nvmftestinit 00:23:24.669 05:21:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:24.669 05:21:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.669 05:21:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:24.669 05:21:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:24.669 05:21:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:24.669 05:21:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.669 05:21:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.669 05:21:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:23:24.669 05:21:21 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:24.669 05:21:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:24.669 05:21:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:24.669 05:21:21 -- common/autotest_common.sh@10 -- # set +x 00:23:29.943 05:21:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:29.943 05:21:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:29.943 05:21:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:29.943 05:21:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:29.943 05:21:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:29.943 05:21:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:29.943 05:21:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:29.943 05:21:26 -- nvmf/common.sh@294 -- # net_devs=() 00:23:29.943 05:21:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:29.943 05:21:26 -- nvmf/common.sh@295 -- # e810=() 00:23:29.943 05:21:26 -- nvmf/common.sh@295 -- # local -ga e810 00:23:29.943 05:21:26 -- nvmf/common.sh@296 -- # x722=() 00:23:29.943 05:21:26 -- nvmf/common.sh@296 -- # local -ga x722 00:23:29.943 05:21:26 -- nvmf/common.sh@297 -- # mlx=() 00:23:29.944 05:21:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:29.944 05:21:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:23:29.944 05:21:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.944 05:21:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:29.944 05:21:26 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:29.944 05:21:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:29.944 05:21:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:29.944 05:21:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:29.944 05:21:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:29.944 05:21:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:29.944 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:29.944 05:21:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:29.944 05:21:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:29.944 05:21:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:29.944 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:29.944 05:21:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:29.944 05:21:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:29.944 05:21:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:23:29.944 05:21:26 -- nvmf/common.sh@376 -- # modinfo irdma 00:23:29.944 05:21:26 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:23:29.944 05:21:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:29.944 05:21:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.944 05:21:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:29.944 05:21:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.944 05:21:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:29.944 Found net devices under 0000:af:00.0: cvl_0_0 00:23:29.944 05:21:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.944 05:21:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:29.944 05:21:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.944 05:21:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:29.944 05:21:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.944 05:21:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:29.944 Found net devices under 0000:af:00.1: cvl_0_1 00:23:29.944 05:21:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.944 05:21:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:29.944 05:21:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:29.944 05:21:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 
00:23:29.944 05:21:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:29.944 05:21:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:29.944 05:21:26 -- nvmf/common.sh@57 -- # uname 00:23:29.944 05:21:26 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:29.944 05:21:26 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:29.944 05:21:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:29.944 05:21:26 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:29.944 05:21:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:29.944 05:21:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:29.944 05:21:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:29.944 05:21:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:29.944 05:21:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:29.944 05:21:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:29.944 05:21:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:29.944 05:21:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:29.944 05:21:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:29.944 05:21:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:29.944 05:21:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:29.944 05:21:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:29.944 05:21:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:29.944 05:21:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.944 05:21:26 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:23:29.944 05:21:26 -- nvmf/common.sh@104 -- # continue 2 00:23:29.944 05:21:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:29.944 05:21:26 -- nvmf/common.sh@101 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:23:29.944 05:21:26 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.944 05:21:26 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:23:29.944 05:21:26 -- nvmf/common.sh@104 -- # continue 2 00:23:29.944 05:21:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:29.944 05:21:26 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:23:29.944 05:21:26 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:23:29.944 05:21:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:23:29.944 05:21:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:29.944 05:21:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:29.944 05:21:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:29.944 05:21:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:23:29.944 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:23:29.944 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:23:29.944 altname enp175s0f0np0 00:23:29.944 altname ens801f0np0 00:23:29.944 inet 192.168.100.8/24 scope global cvl_0_0 00:23:29.944 valid_lft forever preferred_lft forever 00:23:29.944 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:23:29.944 valid_lft forever preferred_lft forever 00:23:29.944 05:21:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:29.944 05:21:26 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:23:29.944 05:21:26 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:23:29.944 05:21:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:23:29.944 05:21:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:29.944 05:21:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:29.944 05:21:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:29.944 05:21:26 -- 
nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:23:29.944 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:23:29.944 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:23:29.944 altname enp175s0f1np1 00:23:29.944 altname ens801f1np1 00:23:29.944 inet 192.168.100.9/24 scope global cvl_0_1 00:23:29.944 valid_lft forever preferred_lft forever 00:23:29.944 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:23:29.944 valid_lft forever preferred_lft forever 00:23:29.944 05:21:26 -- nvmf/common.sh@410 -- # return 0 00:23:29.944 05:21:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:29.944 05:21:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:29.944 05:21:26 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:29.944 05:21:26 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:29.945 05:21:26 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:29.945 05:21:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:29.945 05:21:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:29.945 05:21:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:29.945 05:21:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:29.945 05:21:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:29.945 05:21:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:29.945 05:21:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.945 05:21:26 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:23:29.945 05:21:26 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:23:29.945 05:21:26 -- nvmf/common.sh@104 -- # continue 2 00:23:29.945 05:21:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:29.945 05:21:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.945 05:21:26 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == 
\c\v\l\_\0\_\0 ]] 00:23:29.945 05:21:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.945 05:21:26 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:23:29.945 05:21:26 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:23:29.945 05:21:26 -- nvmf/common.sh@104 -- # continue 2 00:23:29.945 05:21:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:29.945 05:21:26 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:23:29.945 05:21:26 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:23:29.945 05:21:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:23:29.945 05:21:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:29.945 05:21:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:29.945 05:21:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:29.945 05:21:26 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:23:29.945 05:21:26 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:23:29.945 05:21:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:23:29.945 05:21:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:29.945 05:21:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:29.945 05:21:26 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:29.945 192.168.100.9' 00:23:29.945 05:21:26 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:29.945 192.168.100.9' 00:23:29.945 05:21:26 -- nvmf/common.sh@445 -- # head -n 1 00:23:29.945 05:21:26 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:29.945 05:21:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:29.945 192.168.100.9' 00:23:29.945 05:21:26 -- nvmf/common.sh@446 -- # tail -n +2 00:23:29.945 05:21:26 -- nvmf/common.sh@446 -- # head -n 1 00:23:29.945 05:21:26 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:29.945 05:21:26 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:29.945 05:21:26 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:29.945 
05:21:26 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:29.945 05:21:26 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:29.945 05:21:26 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:29.945 05:21:26 -- host/fio.sh@16 -- # [[ y != y ]] 00:23:29.945 05:21:26 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:29.945 05:21:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.945 05:21:26 -- common/autotest_common.sh@10 -- # set +x 00:23:29.945 05:21:26 -- host/fio.sh@24 -- # nvmfpid=368812 00:23:29.945 05:21:26 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.945 05:21:26 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.945 05:21:26 -- host/fio.sh@28 -- # waitforlisten 368812 00:23:29.945 05:21:26 -- common/autotest_common.sh@829 -- # '[' -z 368812 ']' 00:23:29.945 05:21:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.945 05:21:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.945 05:21:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.945 05:21:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.945 05:21:26 -- common/autotest_common.sh@10 -- # set +x 00:23:29.945 [2024-11-20 05:21:26.679268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:29.945 [2024-11-20 05:21:26.679306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.945 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.945 [2024-11-20 05:21:26.737537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.204 [2024-11-20 05:21:26.814295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:30.204 [2024-11-20 05:21:26.814410] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.204 [2024-11-20 05:21:26.814418] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.204 [2024-11-20 05:21:26.814424] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.204 [2024-11-20 05:21:26.814472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.204 [2024-11-20 05:21:26.814575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.204 [2024-11-20 05:21:26.814663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.204 [2024-11-20 05:21:26.814664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.772 05:21:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.772 05:21:27 -- common/autotest_common.sh@862 -- # return 0 00:23:30.772 05:21:27 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:31.031 [2024-11-20 05:21:27.655690] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xd66100/0xd65740) succeed. 00:23:31.031 [2024-11-20 05:21:27.664643] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xd67470/0xd65cc0) succeed. 
00:23:31.031 [2024-11-20 05:21:27.664664] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:23:31.031 05:21:27 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:31.031 05:21:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:31.031 05:21:27 -- common/autotest_common.sh@10 -- # set +x 00:23:31.031 05:21:27 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:31.290 Malloc1 00:23:31.290 05:21:27 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:31.549 05:21:28 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:31.549 05:21:28 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:31.808 [2024-11-20 05:21:28.478514] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:31.808 05:21:28 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:32.067 05:21:28 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme 00:23:32.067 05:21:28 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:32.067 05:21:28 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:32.067 05:21:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:23:32.067 05:21:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:32.067 05:21:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:23:32.067 05:21:28 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:32.067 05:21:28 -- common/autotest_common.sh@1330 -- # shift 00:23:32.067 05:21:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:23:32.067 05:21:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:23:32.067 05:21:28 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:32.067 05:21:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:23:32.067 05:21:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:23:32.067 05:21:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:23:32.067 05:21:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:23:32.067 05:21:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:23:32.067 05:21:28 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:32.067 05:21:28 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:23:32.067 05:21:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:23:32.067 05:21:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:23:32.067 05:21:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:23:32.067 05:21:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:32.068 05:21:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:32.326 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:32.326 fio-3.35 00:23:32.326 Starting 1 thread 00:23:32.326 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.862 00:23:34.862 test: (groupid=0, jobs=1): err= 0: pid=369416: Wed Nov 20 05:21:31 2024 00:23:34.862 read: IOPS=18.7k, BW=73.0MiB/s (76.6MB/s)(146MiB/2004msec) 00:23:34.862 slat (nsec): min=1365, max=21699, avg=1511.95, stdev=475.04 00:23:34.862 clat (usec): min=1883, max=6405, avg=3398.20, stdev=91.91 00:23:34.862 lat (usec): min=1900, max=6406, avg=3399.71, stdev=91.86 00:23:34.862 clat percentiles (usec): 00:23:34.862 | 1.00th=[ 3359], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3392], 00:23:34.862 | 30.00th=[ 3392], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3392], 00:23:34.862 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3425], 95.00th=[ 3425], 00:23:34.862 | 99.00th=[ 3720], 99.50th=[ 3818], 99.90th=[ 4293], 99.95th=[ 5145], 00:23:34.863 | 99.99th=[ 5997] 00:23:34.863 bw ( KiB/s): min=73608, max=75544, per=100.00%, avg=74832.00, stdev=897.95, samples=4 00:23:34.863 iops : min=18402, max=18886, avg=18708.00, stdev=224.49, samples=4 00:23:34.863 write: IOPS=18.7k, BW=73.1MiB/s (76.6MB/s)(146MiB/2004msec); 0 zone resets 00:23:34.863 slat (nsec): min=1404, max=17985, avg=1593.73, stdev=478.21 00:23:34.863 clat (usec): min=1909, max=6398, avg=3396.98, stdev=99.92 00:23:34.863 lat (usec): min=1918, max=6400, avg=3398.57, stdev=99.87 00:23:34.863 clat percentiles (usec): 00:23:34.863 | 1.00th=[ 3359], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3359], 00:23:34.863 | 30.00th=[ 3392], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3392], 00:23:34.863 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3425], 95.00th=[ 3425], 00:23:34.863 | 99.00th=[ 3720], 99.50th=[ 3818], 
99.90th=[ 4686], 99.95th=[ 5604], 00:23:34.863 | 99.99th=[ 6063] 00:23:34.863 bw ( KiB/s): min=73592, max=75624, per=100.00%, avg=74824.00, stdev=889.60, samples=4 00:23:34.863 iops : min=18398, max=18906, avg=18706.00, stdev=222.40, samples=4 00:23:34.863 lat (msec) : 2=0.02%, 4=99.87%, 10=0.12% 00:23:34.863 cpu : usr=99.50%, sys=0.10%, ctx=9, majf=0, minf=2 00:23:34.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:34.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:34.863 issued rwts: total=37472,37483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:34.863 00:23:34.863 Run status group 0 (all jobs): 00:23:34.863 READ: bw=73.0MiB/s (76.6MB/s), 73.0MiB/s-73.0MiB/s (76.6MB/s-76.6MB/s), io=146MiB (153MB), run=2004-2004msec 00:23:34.863 WRITE: bw=73.1MiB/s (76.6MB/s), 73.1MiB/s-73.1MiB/s (76.6MB/s-76.6MB/s), io=146MiB (154MB), run=2004-2004msec 00:23:34.863 05:21:31 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:34.863 05:21:31 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:34.863 05:21:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:23:34.863 05:21:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.863 05:21:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:23:34.863 05:21:31 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 
00:23:34.863 05:21:31 -- common/autotest_common.sh@1330 -- # shift 00:23:34.863 05:21:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:23:34.863 05:21:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.863 05:21:31 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:34.863 05:21:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:23:34.863 05:21:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:23:34.863 05:21:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:23:34.863 05:21:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:23:34.863 05:21:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.863 05:21:31 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:34.863 05:21:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:23:34.863 05:21:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:23:34.863 05:21:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:23:34.863 05:21:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:23:34.863 05:21:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:34.863 05:21:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:34.863 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:34.863 fio-3.35 00:23:34.863 Starting 1 thread 00:23:35.121 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.652 00:23:37.652 test: (groupid=0, jobs=1): err= 0: pid=369987: Wed Nov 20 05:21:33 2024 00:23:37.652 read: IOPS=15.2k, BW=238MiB/s 
(249MB/s)(467MiB/1964msec) 00:23:37.652 slat (nsec): min=2280, max=31076, avg=2610.84, stdev=902.60 00:23:37.652 clat (usec): min=392, max=7144, avg=1803.40, stdev=1205.47 00:23:37.652 lat (usec): min=394, max=7147, avg=1806.02, stdev=1205.76 00:23:37.652 clat percentiles (usec): 00:23:37.652 | 1.00th=[ 758], 5.00th=[ 873], 10.00th=[ 947], 20.00th=[ 1045], 00:23:37.652 | 30.00th=[ 1139], 40.00th=[ 1254], 50.00th=[ 1369], 60.00th=[ 1500], 00:23:37.652 | 70.00th=[ 1713], 80.00th=[ 2024], 90.00th=[ 4424], 95.00th=[ 4686], 00:23:37.652 | 99.00th=[ 5932], 99.50th=[ 6456], 99.90th=[ 6849], 99.95th=[ 6980], 00:23:37.652 | 99.99th=[ 7111] 00:23:37.652 bw ( KiB/s): min=103424, max=125216, per=48.25%, avg=117536.00, stdev=9663.47, samples=4 00:23:37.652 iops : min= 6464, max= 7826, avg=7346.00, stdev=603.97, samples=4 00:23:37.652 write: IOPS=8703, BW=136MiB/s (143MB/s)(239MiB/1758msec); 0 zone resets 00:23:37.652 slat (nsec): min=26944, max=90275, avg=29262.94, stdev=4234.85 00:23:37.652 clat (usec): min=3936, max=17618, avg=11249.96, stdev=1625.87 00:23:37.652 lat (usec): min=3963, max=17647, avg=11279.23, stdev=1625.52 00:23:37.652 clat percentiles (usec): 00:23:37.652 | 1.00th=[ 6063], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10028], 00:23:37.652 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:23:37.652 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13304], 95.00th=[13960], 00:23:37.652 | 99.00th=[15270], 99.50th=[16188], 99.90th=[16909], 99.95th=[17171], 00:23:37.652 | 99.99th=[17433] 00:23:37.652 bw ( KiB/s): min=108640, max=129536, per=87.30%, avg=121568.00, stdev=9169.77, samples=4 00:23:37.652 iops : min= 6790, max= 8096, avg=7598.00, stdev=573.11, samples=4 00:23:37.652 lat (usec) : 500=0.01%, 750=0.61%, 1000=9.33% 00:23:37.652 lat (msec) : 2=42.74%, 4=6.35%, 10=13.63%, 20=27.33% 00:23:37.652 cpu : usr=96.36%, sys=2.99%, ctx=109, majf=0, minf=1 00:23:37.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:23:37.652 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:37.652 issued rwts: total=29899,15301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:37.652 00:23:37.652 Run status group 0 (all jobs): 00:23:37.652 READ: bw=238MiB/s (249MB/s), 238MiB/s-238MiB/s (249MB/s-249MB/s), io=467MiB (490MB), run=1964-1964msec 00:23:37.652 WRITE: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=239MiB (251MB), run=1758-1758msec 00:23:37.653 05:21:33 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.653 05:21:34 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:23:37.653 05:21:34 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:23:37.653 05:21:34 -- host/fio.sh@51 -- # get_nvme_bdfs 00:23:37.653 05:21:34 -- common/autotest_common.sh@1508 -- # bdfs=() 00:23:37.653 05:21:34 -- common/autotest_common.sh@1508 -- # local bdfs 00:23:37.653 05:21:34 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:37.653 05:21:34 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:37.653 05:21:34 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:23:37.653 05:21:34 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:23:37.653 05:21:34 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0 00:23:37.653 05:21:34 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 192.168.100.8 00:23:40.941 Nvme0n1 00:23:40.941 05:21:37 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:23:43.476 
05:21:40 -- host/fio.sh@53 -- # ls_guid=2c1d8208-b690-4e66-afdb-47a18d276056 00:23:43.476 05:21:40 -- host/fio.sh@54 -- # get_lvs_free_mb 2c1d8208-b690-4e66-afdb-47a18d276056 00:23:43.476 05:21:40 -- common/autotest_common.sh@1353 -- # local lvs_uuid=2c1d8208-b690-4e66-afdb-47a18d276056 00:23:43.476 05:21:40 -- common/autotest_common.sh@1354 -- # local lvs_info 00:23:43.476 05:21:40 -- common/autotest_common.sh@1355 -- # local fc 00:23:43.476 05:21:40 -- common/autotest_common.sh@1356 -- # local cs 00:23:43.476 05:21:40 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:43.735 05:21:40 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:23:43.735 { 00:23:43.735 "uuid": "2c1d8208-b690-4e66-afdb-47a18d276056", 00:23:43.735 "name": "lvs_0", 00:23:43.735 "base_bdev": "Nvme0n1", 00:23:43.735 "total_data_clusters": 930, 00:23:43.735 "free_clusters": 930, 00:23:43.735 "block_size": 512, 00:23:43.735 "cluster_size": 1073741824 00:23:43.735 } 00:23:43.735 ]' 00:23:43.735 05:21:40 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="2c1d8208-b690-4e66-afdb-47a18d276056") .free_clusters' 00:23:43.735 05:21:40 -- common/autotest_common.sh@1358 -- # fc=930 00:23:43.736 05:21:40 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="2c1d8208-b690-4e66-afdb-47a18d276056") .cluster_size' 00:23:43.736 05:21:40 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:23:43.736 05:21:40 -- common/autotest_common.sh@1362 -- # free_mb=952320 00:23:43.736 05:21:40 -- common/autotest_common.sh@1363 -- # echo 952320 00:23:43.736 952320 00:23:43.736 05:21:40 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:23:43.994 ff49d70c-e32f-4179-bd08-007672d7f08d 00:23:43.994 05:21:40 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 
-a -s SPDK00000000000001 00:23:44.253 05:21:40 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:23:44.513 05:21:41 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:23:44.513 05:21:41 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:44.513 05:21:41 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:44.513 05:21:41 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:23:44.513 05:21:41 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:44.513 05:21:41 -- common/autotest_common.sh@1328 -- # local sanitizers 00:23:44.513 05:21:41 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:44.513 05:21:41 -- common/autotest_common.sh@1330 -- # shift 00:23:44.513 05:21:41 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:23:44.513 05:21:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.513 05:21:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:23:44.513 05:21:41 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:44.513 05:21:41 -- common/autotest_common.sh@1334 -- # grep libasan 00:23:44.785 05:21:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:23:44.786 05:21:41 -- common/autotest_common.sh@1335 -- # 
[[ -n '' ]] 00:23:44.786 05:21:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.786 05:21:41 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:44.786 05:21:41 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:23:44.786 05:21:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:23:44.786 05:21:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:23:44.786 05:21:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:23:44.786 05:21:41 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:44.786 05:21:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:45.046 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:45.046 fio-3.35 00:23:45.046 Starting 1 thread 00:23:45.046 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.568 00:23:47.568 test: (groupid=0, jobs=1): err= 0: pid=371744: Wed Nov 20 05:21:43 2024 00:23:47.568 read: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(88.1MiB/2005msec) 00:23:47.568 slat (nsec): min=1363, max=14653, avg=1495.14, stdev=218.47 00:23:47.568 clat (usec): min=433, max=168630, avg=5674.75, stdev=8824.37 00:23:47.568 lat (usec): min=435, max=168645, avg=5676.24, stdev=8824.41 00:23:47.568 clat percentiles (msec): 00:23:47.568 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:23:47.568 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:23:47.568 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:23:47.568 | 99.00th=[ 6], 99.50th=[ 6], 99.90th=[ 169], 99.95th=[ 169], 00:23:47.568 | 99.99th=[ 169] 00:23:47.568 bw ( KiB/s): min=31616, max=49624, per=99.97%, avg=44998.00, 
stdev=8922.44, samples=4 00:23:47.568 iops : min= 7904, max=12406, avg=11249.50, stdev=2230.61, samples=4 00:23:47.568 write: IOPS=11.2k, BW=43.7MiB/s (45.9MB/s)(87.7MiB/2005msec); 0 zone resets 00:23:47.568 slat (nsec): min=1403, max=17456, avg=1576.68, stdev=302.69 00:23:47.568 clat (usec): min=132, max=168836, avg=5625.66, stdev=8246.90 00:23:47.568 lat (usec): min=134, max=168839, avg=5627.23, stdev=8246.96 00:23:47.568 clat percentiles (msec): 00:23:47.568 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:23:47.568 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:23:47.568 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:23:47.568 | 99.00th=[ 6], 99.50th=[ 6], 99.90th=[ 169], 99.95th=[ 169], 00:23:47.568 | 99.99th=[ 169] 00:23:47.568 bw ( KiB/s): min=32312, max=49408, per=99.96%, avg=44768.00, stdev=8311.51, samples=4 00:23:47.568 iops : min= 8078, max=12352, avg=11192.00, stdev=2077.88, samples=4 00:23:47.568 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.03% 00:23:47.568 lat (msec) : 2=0.03%, 4=0.24%, 10=99.39%, 250=0.28% 00:23:47.568 cpu : usr=99.55%, sys=0.10%, ctx=8, majf=0, minf=2 00:23:47.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:47.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.568 issued rwts: total=22562,22448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.568 00:23:47.568 Run status group 0 (all jobs): 00:23:47.568 READ: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s (46.1MB/s-46.1MB/s), io=88.1MiB (92.4MB), run=2005-2005msec 00:23:47.568 WRITE: bw=43.7MiB/s (45.9MB/s), 43.7MiB/s-43.7MiB/s (45.9MB/s-45.9MB/s), io=87.7MiB (91.9MB), run=2005-2005msec 00:23:47.568 05:21:43 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:23:47.568 05:21:44 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:23:48.502 05:21:45 -- host/fio.sh@64 -- # ls_nested_guid=18ec36d5-f640-4128-8853-39cdf96ef149 00:23:48.502 05:21:45 -- host/fio.sh@65 -- # get_lvs_free_mb 18ec36d5-f640-4128-8853-39cdf96ef149 00:23:48.502 05:21:45 -- common/autotest_common.sh@1353 -- # local lvs_uuid=18ec36d5-f640-4128-8853-39cdf96ef149 00:23:48.502 05:21:45 -- common/autotest_common.sh@1354 -- # local lvs_info 00:23:48.502 05:21:45 -- common/autotest_common.sh@1355 -- # local fc 00:23:48.502 05:21:45 -- common/autotest_common.sh@1356 -- # local cs 00:23:48.502 05:21:45 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:48.760 05:21:45 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:23:48.760 { 00:23:48.760 "uuid": "2c1d8208-b690-4e66-afdb-47a18d276056", 00:23:48.760 "name": "lvs_0", 00:23:48.760 "base_bdev": "Nvme0n1", 00:23:48.760 "total_data_clusters": 930, 00:23:48.760 "free_clusters": 0, 00:23:48.760 "block_size": 512, 00:23:48.760 "cluster_size": 1073741824 00:23:48.760 }, 00:23:48.760 { 00:23:48.760 "uuid": "18ec36d5-f640-4128-8853-39cdf96ef149", 00:23:48.760 "name": "lvs_n_0", 00:23:48.760 "base_bdev": "ff49d70c-e32f-4179-bd08-007672d7f08d", 00:23:48.760 "total_data_clusters": 237847, 00:23:48.760 "free_clusters": 237847, 00:23:48.760 "block_size": 512, 00:23:48.760 "cluster_size": 4194304 00:23:48.760 } 00:23:48.760 ]' 00:23:48.760 05:21:45 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="18ec36d5-f640-4128-8853-39cdf96ef149") .free_clusters' 00:23:48.760 05:21:45 -- common/autotest_common.sh@1358 -- # fc=237847 00:23:48.760 05:21:45 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="18ec36d5-f640-4128-8853-39cdf96ef149") .cluster_size' 00:23:48.760 05:21:45 -- 
common/autotest_common.sh@1359 -- # cs=4194304 00:23:48.760 05:21:45 -- common/autotest_common.sh@1362 -- # free_mb=951388 00:23:48.760 05:21:45 -- common/autotest_common.sh@1363 -- # echo 951388 00:23:48.760 951388 00:23:48.760 05:21:45 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:23:49.325 594676df-30a6-455a-8920-c9d3dc7284c2 00:23:49.325 05:21:46 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:23:49.583 05:21:46 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:23:49.583 05:21:46 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:23:49.841 05:21:46 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:49.841 05:21:46 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:49.841 05:21:46 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:23:49.841 05:21:46 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:49.841 05:21:46 -- common/autotest_common.sh@1328 -- # local sanitizers 00:23:49.841 05:21:46 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:49.841 05:21:46 -- common/autotest_common.sh@1330 -- # shift 
00:23:49.841 05:21:46 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:23:49.841 05:21:46 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:23:49.841 05:21:46 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:23:49.841 05:21:46 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:49.841 05:21:46 -- common/autotest_common.sh@1334 -- # grep libasan 00:23:49.841 05:21:46 -- common/autotest_common.sh@1334 -- # asan_lib= 00:23:49.841 05:21:46 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:23:49.841 05:21:46 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:23:49.841 05:21:46 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:23:49.841 05:21:46 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:23:49.841 05:21:46 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:23:49.841 05:21:46 -- common/autotest_common.sh@1334 -- # asan_lib= 00:23:49.841 05:21:46 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:23:49.841 05:21:46 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:49.841 05:21:46 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:50.099 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:50.099 fio-3.35 00:23:50.099 Starting 1 thread 00:23:50.099 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.626 00:23:52.626 test: (groupid=0, jobs=1): err= 0: pid=372780: Wed Nov 20 05:21:49 2024 00:23:52.626 read: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(83.0MiB/2005msec) 00:23:52.626 slat (nsec): min=1370, max=17350, avg=1504.44, 
stdev=408.69 00:23:52.626 clat (usec): min=2411, max=10461, avg=5979.98, stdev=182.71 00:23:52.626 lat (usec): min=2417, max=10462, avg=5981.48, stdev=182.68 00:23:52.626 clat percentiles (usec): 00:23:52.626 | 1.00th=[ 5866], 5.00th=[ 5932], 10.00th=[ 5932], 20.00th=[ 5932], 00:23:52.626 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 5997], 60.00th=[ 5997], 00:23:52.626 | 70.00th=[ 5997], 80.00th=[ 5997], 90.00th=[ 5997], 95.00th=[ 6063], 00:23:52.626 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 8979], 99.95th=[ 9634], 00:23:52.626 | 99.99th=[10421] 00:23:52.626 bw ( KiB/s): min=41080, max=43032, per=99.92%, avg=42368.00, stdev=894.83, samples=4 00:23:52.626 iops : min=10270, max=10758, avg=10592.00, stdev=223.71, samples=4 00:23:52.626 write: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(83.0MiB/2005msec); 0 zone resets 00:23:52.626 slat (nsec): min=1409, max=17796, avg=1583.07, stdev=422.52 00:23:52.626 clat (usec): min=3586, max=10473, avg=5997.42, stdev=165.83 00:23:52.626 lat (usec): min=3589, max=10474, avg=5999.01, stdev=165.80 00:23:52.626 clat percentiles (usec): 00:23:52.626 | 1.00th=[ 5932], 5.00th=[ 5932], 10.00th=[ 5932], 20.00th=[ 5997], 00:23:52.626 | 30.00th=[ 5997], 40.00th=[ 5997], 50.00th=[ 5997], 60.00th=[ 5997], 00:23:52.626 | 70.00th=[ 5997], 80.00th=[ 5997], 90.00th=[ 6063], 95.00th=[ 6063], 00:23:52.626 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 8225], 99.95th=[ 9634], 00:23:52.626 | 99.99th=[10421] 00:23:52.626 bw ( KiB/s): min=41560, max=42816, per=99.98%, avg=42358.00, stdev=551.53, samples=4 00:23:52.626 iops : min=10390, max=10704, avg=10589.50, stdev=137.88, samples=4 00:23:52.626 lat (msec) : 4=0.09%, 10=99.88%, 20=0.03% 00:23:52.626 cpu : usr=99.55%, sys=0.10%, ctx=19, majf=0, minf=2 00:23:52.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:52.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:23:52.626 issued rwts: total=21254,21236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:52.626 00:23:52.626 Run status group 0 (all jobs): 00:23:52.626 READ: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=83.0MiB (87.1MB), run=2005-2005msec 00:23:52.626 WRITE: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=83.0MiB (87.0MB), run=2005-2005msec 00:23:52.626 05:21:49 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:52.626 05:21:49 -- host/fio.sh@74 -- # sync 00:23:52.626 05:21:49 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:23:56.808 05:21:53 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:56.808 05:21:53 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:23:59.336 05:21:56 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:59.594 05:21:56 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:01.493 05:21:58 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:01.493 05:21:58 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:01.493 05:21:58 -- host/fio.sh@86 -- # nvmftestfini 00:24:01.493 05:21:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:01.493 05:21:58 -- nvmf/common.sh@116 -- # sync 00:24:01.493 05:21:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:01.493 05:21:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:01.493 05:21:58 -- nvmf/common.sh@119 -- # set +e 00:24:01.493 05:21:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:01.493 05:21:58 
-- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:01.493 rmmod nvme_rdma 00:24:01.493 rmmod nvme_fabrics 00:24:01.493 05:21:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:01.493 05:21:58 -- nvmf/common.sh@123 -- # set -e 00:24:01.493 05:21:58 -- nvmf/common.sh@124 -- # return 0 00:24:01.493 05:21:58 -- nvmf/common.sh@477 -- # '[' -n 368812 ']' 00:24:01.493 05:21:58 -- nvmf/common.sh@478 -- # killprocess 368812 00:24:01.493 05:21:58 -- common/autotest_common.sh@936 -- # '[' -z 368812 ']' 00:24:01.493 05:21:58 -- common/autotest_common.sh@940 -- # kill -0 368812 00:24:01.493 05:21:58 -- common/autotest_common.sh@941 -- # uname 00:24:01.493 05:21:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:01.493 05:21:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 368812 00:24:01.493 05:21:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:01.493 05:21:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:01.493 05:21:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 368812' 00:24:01.493 killing process with pid 368812 00:24:01.493 05:21:58 -- common/autotest_common.sh@955 -- # kill 368812 00:24:01.493 05:21:58 -- common/autotest_common.sh@960 -- # wait 368812 00:24:01.752 05:21:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:01.752 05:21:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:01.752 00:24:01.752 real 0m37.402s 00:24:01.752 user 2m42.137s 00:24:01.752 sys 0m5.976s 00:24:01.752 05:21:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:01.752 05:21:58 -- common/autotest_common.sh@10 -- # set +x 00:24:01.752 ************************************ 00:24:01.752 END TEST nvmf_fio_host 00:24:01.752 ************************************ 00:24:01.752 05:21:58 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:01.752 05:21:58 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:01.752 05:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:01.752 05:21:58 -- common/autotest_common.sh@10 -- # set +x 00:24:01.752 ************************************ 00:24:01.752 START TEST nvmf_failover 00:24:01.752 ************************************ 00:24:01.752 05:21:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:02.011 * Looking for test storage... 00:24:02.011 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:24:02.011 05:21:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:02.011 05:21:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:02.011 05:21:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:02.011 05:21:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:02.011 05:21:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:02.011 05:21:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:02.011 05:21:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:02.011 05:21:58 -- scripts/common.sh@335 -- # IFS=.-: 00:24:02.011 05:21:58 -- scripts/common.sh@335 -- # read -ra ver1 00:24:02.011 05:21:58 -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.011 05:21:58 -- scripts/common.sh@336 -- # read -ra ver2 00:24:02.011 05:21:58 -- scripts/common.sh@337 -- # local 'op=<' 00:24:02.011 05:21:58 -- scripts/common.sh@339 -- # ver1_l=2 00:24:02.011 05:21:58 -- scripts/common.sh@340 -- # ver2_l=1 00:24:02.011 05:21:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:02.011 05:21:58 -- scripts/common.sh@343 -- # case "$op" in 00:24:02.011 05:21:58 -- scripts/common.sh@344 -- # : 1 00:24:02.011 05:21:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:02.011 05:21:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.011 05:21:58 -- scripts/common.sh@364 -- # decimal 1 00:24:02.011 05:21:58 -- scripts/common.sh@352 -- # local d=1 00:24:02.011 05:21:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.011 05:21:58 -- scripts/common.sh@354 -- # echo 1 00:24:02.011 05:21:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:02.011 05:21:58 -- scripts/common.sh@365 -- # decimal 2 00:24:02.011 05:21:58 -- scripts/common.sh@352 -- # local d=2 00:24:02.011 05:21:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.011 05:21:58 -- scripts/common.sh@354 -- # echo 2 00:24:02.011 05:21:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:02.011 05:21:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.011 05:21:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:02.011 05:21:58 -- scripts/common.sh@367 -- # return 0 00:24:02.011 05:21:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.011 05:21:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:02.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.011 --rc genhtml_branch_coverage=1 00:24:02.011 --rc genhtml_function_coverage=1 00:24:02.011 --rc genhtml_legend=1 00:24:02.011 --rc geninfo_all_blocks=1 00:24:02.011 --rc geninfo_unexecuted_blocks=1 00:24:02.011 00:24:02.011 ' 00:24:02.011 05:21:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:02.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.011 --rc genhtml_branch_coverage=1 00:24:02.011 --rc genhtml_function_coverage=1 00:24:02.011 --rc genhtml_legend=1 00:24:02.011 --rc geninfo_all_blocks=1 00:24:02.011 --rc geninfo_unexecuted_blocks=1 00:24:02.011 00:24:02.011 ' 00:24:02.011 05:21:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:02.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.011 --rc genhtml_branch_coverage=1 00:24:02.011 --rc 
genhtml_function_coverage=1 00:24:02.011 --rc genhtml_legend=1 00:24:02.011 --rc geninfo_all_blocks=1 00:24:02.011 --rc geninfo_unexecuted_blocks=1 00:24:02.011 00:24:02.011 ' 00:24:02.011 05:21:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:02.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.011 --rc genhtml_branch_coverage=1 00:24:02.011 --rc genhtml_function_coverage=1 00:24:02.011 --rc genhtml_legend=1 00:24:02.011 --rc geninfo_all_blocks=1 00:24:02.011 --rc geninfo_unexecuted_blocks=1 00:24:02.011 00:24:02.011 ' 00:24:02.011 05:21:58 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.011 05:21:58 -- nvmf/common.sh@7 -- # uname -s 00:24:02.011 05:21:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.011 05:21:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.011 05:21:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.011 05:21:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.011 05:21:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.011 05:21:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.011 05:21:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.011 05:21:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.011 05:21:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.011 05:21:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.011 05:21:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:02.011 05:21:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:02.011 05:21:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.011 05:21:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.011 05:21:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:02.011 05:21:58 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:24:02.011 05:21:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.011 05:21:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.011 05:21:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.012 05:21:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.012 05:21:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.012 05:21:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.012 05:21:58 -- paths/export.sh@5 -- # export PATH 00:24:02.012 05:21:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.012 05:21:58 -- nvmf/common.sh@46 -- # : 0 00:24:02.012 05:21:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.012 05:21:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.012 05:21:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.012 05:21:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.012 05:21:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.012 05:21:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.012 05:21:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.012 05:21:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.012 05:21:58 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:02.012 05:21:58 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:02.012 05:21:58 -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:24:02.012 05:21:58 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:02.012 05:21:58 -- host/failover.sh@18 -- # nvmftestinit 00:24:02.012 05:21:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:02.012 05:21:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.012 05:21:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.012 05:21:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.012 05:21:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.012 05:21:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.012 05:21:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.012 05:21:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.012 05:21:58 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:02.012 05:21:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:02.012 05:21:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:02.012 05:21:58 -- common/autotest_common.sh@10 -- # set +x 00:24:07.298 05:22:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:07.298 05:22:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:07.298 05:22:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:07.298 05:22:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:07.298 05:22:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:07.298 05:22:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:07.298 05:22:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:07.298 05:22:03 -- nvmf/common.sh@294 -- # net_devs=() 00:24:07.298 05:22:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:07.298 05:22:03 -- nvmf/common.sh@295 -- # e810=() 00:24:07.298 05:22:03 -- nvmf/common.sh@295 -- # local -ga e810 00:24:07.298 05:22:03 -- nvmf/common.sh@296 -- # x722=() 00:24:07.298 05:22:03 -- nvmf/common.sh@296 -- # local -ga x722 00:24:07.298 05:22:03 
-- nvmf/common.sh@297 -- # mlx=() 00:24:07.298 05:22:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:07.298 05:22:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.298 05:22:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:07.298 05:22:03 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:07.298 05:22:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:07.298 05:22:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:07.298 05:22:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:07.298 05:22:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:07.298 05:22:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:07.298 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:07.298 05:22:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:07.298 05:22:03 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:07.298 05:22:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:07.298 05:22:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:07.298 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:07.298 05:22:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:07.298 05:22:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:07.298 05:22:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:24:07.298 05:22:03 -- nvmf/common.sh@376 -- # modinfo irdma 00:24:07.298 05:22:03 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:24:07.298 05:22:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:07.298 05:22:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.298 05:22:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:07.298 05:22:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.298 05:22:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:07.298 Found net devices under 0000:af:00.0: cvl_0_0 
00:24:07.298 05:22:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.298 05:22:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:07.298 05:22:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.298 05:22:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:07.298 05:22:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.298 05:22:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:07.298 Found net devices under 0000:af:00.1: cvl_0_1 00:24:07.298 05:22:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.298 05:22:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:07.298 05:22:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:07.298 05:22:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:07.298 05:22:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:07.298 05:22:03 -- nvmf/common.sh@57 -- # uname 00:24:07.298 05:22:03 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:07.298 05:22:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:07.298 05:22:03 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:07.298 05:22:03 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:07.298 05:22:03 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:07.298 05:22:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:07.298 05:22:03 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:07.298 05:22:03 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:07.298 05:22:03 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:07.298 05:22:03 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:07.298 05:22:03 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:07.298 05:22:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 
00:24:07.298 05:22:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:07.298 05:22:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:07.298 05:22:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:07.298 05:22:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:07.298 05:22:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:07.298 05:22:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.298 05:22:03 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:24:07.298 05:22:03 -- nvmf/common.sh@104 -- # continue 2 00:24:07.298 05:22:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:07.298 05:22:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.298 05:22:03 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.298 05:22:03 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:07.298 05:22:03 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:24:07.298 05:22:03 -- nvmf/common.sh@104 -- # continue 2 00:24:07.298 05:22:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:07.298 05:22:03 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:24:07.298 05:22:03 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:24:07.298 05:22:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:24:07.298 05:22:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:07.298 05:22:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:07.298 05:22:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:07.299 05:22:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:07.299 05:22:03 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:24:07.299 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:07.299 link/ether b4:96:91:a5:c8:d4 brd 
ff:ff:ff:ff:ff:ff 00:24:07.299 altname enp175s0f0np0 00:24:07.299 altname ens801f0np0 00:24:07.299 inet 192.168.100.8/24 scope global cvl_0_0 00:24:07.299 valid_lft forever preferred_lft forever 00:24:07.299 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:24:07.299 valid_lft forever preferred_lft forever 00:24:07.299 05:22:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:07.299 05:22:03 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:24:07.299 05:22:03 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:24:07.299 05:22:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:24:07.299 05:22:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:07.299 05:22:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:07.299 05:22:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:07.299 05:22:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:07.299 05:22:03 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:24:07.299 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:07.299 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:24:07.299 altname enp175s0f1np1 00:24:07.299 altname ens801f1np1 00:24:07.299 inet 192.168.100.9/24 scope global cvl_0_1 00:24:07.299 valid_lft forever preferred_lft forever 00:24:07.299 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:24:07.299 valid_lft forever preferred_lft forever 00:24:07.299 05:22:03 -- nvmf/common.sh@410 -- # return 0 00:24:07.299 05:22:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:07.299 05:22:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:07.299 05:22:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:07.299 05:22:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:07.299 05:22:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:07.299 05:22:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:07.299 05:22:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:07.299 05:22:03 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:07.299 05:22:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:07.299 05:22:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:07.299 05:22:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:07.299 05:22:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.299 05:22:03 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:07.299 05:22:03 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:24:07.299 05:22:03 -- nvmf/common.sh@104 -- # continue 2 00:24:07.299 05:22:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:07.299 05:22:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.299 05:22:03 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:07.299 05:22:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.299 05:22:03 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:07.299 05:22:03 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:24:07.299 05:22:03 -- nvmf/common.sh@104 -- # continue 2 00:24:07.299 05:22:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:07.299 05:22:03 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:24:07.299 05:22:03 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:24:07.299 05:22:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:24:07.299 05:22:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:07.299 05:22:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:07.299 05:22:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:07.299 05:22:03 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:24:07.299 05:22:03 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:24:07.299 05:22:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:24:07.299 05:22:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:07.299 05:22:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 
00:24:07.299 05:22:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:07.299 192.168.100.9' 00:24:07.299 05:22:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:07.299 192.168.100.9' 00:24:07.299 05:22:03 -- nvmf/common.sh@445 -- # head -n 1 00:24:07.299 05:22:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:07.299 05:22:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:07.299 192.168.100.9' 00:24:07.299 05:22:03 -- nvmf/common.sh@446 -- # head -n 1 00:24:07.299 05:22:03 -- nvmf/common.sh@446 -- # tail -n +2 00:24:07.299 05:22:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:07.299 05:22:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:07.299 05:22:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:07.299 05:22:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:07.299 05:22:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:07.299 05:22:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:07.299 05:22:03 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:07.299 05:22:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:07.299 05:22:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:07.299 05:22:03 -- common/autotest_common.sh@10 -- # set +x 00:24:07.299 05:22:04 -- nvmf/common.sh@469 -- # nvmfpid=377674 00:24:07.299 05:22:04 -- nvmf/common.sh@470 -- # waitforlisten 377674 00:24:07.299 05:22:04 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:07.299 05:22:04 -- common/autotest_common.sh@829 -- # '[' -z 377674 ']' 00:24:07.299 05:22:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.299 05:22:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.299 05:22:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:07.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.299 05:22:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.299 05:22:04 -- common/autotest_common.sh@10 -- # set +x 00:24:07.299 [2024-11-20 05:22:04.045525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:07.299 [2024-11-20 05:22:04.045577] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.299 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.299 [2024-11-20 05:22:04.101820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:07.559 [2024-11-20 05:22:04.177699] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:07.559 [2024-11-20 05:22:04.177806] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.559 [2024-11-20 05:22:04.177817] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.559 [2024-11-20 05:22:04.177823] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:07.559 [2024-11-20 05:22:04.177923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.559 [2024-11-20 05:22:04.178009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.559 [2024-11-20 05:22:04.178010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.128 05:22:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.128 05:22:04 -- common/autotest_common.sh@862 -- # return 0 00:24:08.128 05:22:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:08.128 05:22:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:08.128 05:22:04 -- common/autotest_common.sh@10 -- # set +x 00:24:08.128 05:22:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.128 05:22:04 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:08.387 [2024-11-20 05:22:05.091416] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6928d0/0x691f10) succeed. 00:24:08.387 [2024-11-20 05:22:05.100282] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x693bc0/0x692490) succeed. 00:24:08.387 [2024-11-20 05:22:05.100303] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:24:08.387 05:22:05 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:08.646 Malloc0 00:24:08.646 05:22:05 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:08.907 05:22:05 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.907 05:22:05 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:09.166 [2024-11-20 05:22:05.890181] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:09.167 05:22:05 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:09.426 [2024-11-20 05:22:06.074822] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:09.427 05:22:06 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:09.687 [2024-11-20 05:22:06.255494] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:09.687 05:22:06 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:09.687 05:22:06 -- host/failover.sh@31 -- # bdevperf_pid=377942 00:24:09.687 05:22:06 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; 
killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.687 05:22:06 -- host/failover.sh@34 -- # waitforlisten 377942 /var/tmp/bdevperf.sock 00:24:09.687 05:22:06 -- common/autotest_common.sh@829 -- # '[' -z 377942 ']' 00:24:09.687 05:22:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.687 05:22:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.687 05:22:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:09.687 05:22:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.687 05:22:06 -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 05:22:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.623 05:22:07 -- common/autotest_common.sh@862 -- # return 0 00:24:10.623 05:22:07 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:10.623 NVMe0n1 00:24:10.623 05:22:07 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:10.881 00:24:10.881 05:22:07 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:10.881 05:22:07 -- host/failover.sh@39 -- # run_test_pid=378175 00:24:10.881 05:22:07 -- host/failover.sh@41 -- # sleep 1 00:24:11.819 05:22:08 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 
-s 4420 00:24:12.079 05:22:08 -- host/failover.sh@45 -- # sleep 3 00:24:15.369 05:22:11 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:15.369 00:24:15.369 05:22:12 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:15.628 05:22:12 -- host/failover.sh@50 -- # sleep 3 00:24:18.916 05:22:15 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:18.917 [2024-11-20 05:22:15.427936] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:18.917 05:22:15 -- host/failover.sh@55 -- # sleep 1 00:24:19.854 05:22:16 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:19.854 05:22:16 -- host/failover.sh@59 -- # wait 378175 00:24:26.431 0 00:24:26.431 05:22:22 -- host/failover.sh@61 -- # killprocess 377942 00:24:26.431 05:22:22 -- common/autotest_common.sh@936 -- # '[' -z 377942 ']' 00:24:26.431 05:22:22 -- common/autotest_common.sh@940 -- # kill -0 377942 00:24:26.431 05:22:22 -- common/autotest_common.sh@941 -- # uname 00:24:26.431 05:22:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:26.431 05:22:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 377942 00:24:26.431 05:22:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:26.431 05:22:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:26.431 05:22:22 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 377942' 00:24:26.431 killing process with pid 377942 00:24:26.431 05:22:22 -- common/autotest_common.sh@955 -- # kill 377942 00:24:26.431 05:22:22 -- common/autotest_common.sh@960 -- # wait 377942 00:24:26.431 05:22:23 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:26.431 [2024-11-20 05:22:06.311217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:26.431 [2024-11-20 05:22:06.311268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377942 ] 00:24:26.431 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.431 [2024-11-20 05:22:06.366613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.431 [2024-11-20 05:22:06.439069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.431 Running I/O for 15 seconds... 
00:24:26.431 [2024-11-20 05:22:09.316074] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:26.431 [2024-11-20 05:22:09.316119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0xbf0bfe7e 00:24:26.431 [2024-11-20 05:22:09.316129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0xbf0bfe7e 00:24:26.431 [2024-11-20 05:22:09.316153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.431 [2024-11-20 05:22:09.316169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0xbf0bfe7e 00:24:26.431 [2024-11-20 05:22:09.316185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0xdc987c88 00:24:26.431 [2024-11-20 05:22:09.316200] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0xbf0bfe7e 00:24:26.431 [2024-11-20 05:22:09.316216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0xdc987c88 00:24:26.431 [2024-11-20 05:22:09.316231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.431 [2024-11-20 05:22:09.316245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.431 [2024-11-20 05:22:09.316259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013877a80 len:0x1000 key:0xbf0bfe7e 00:24:26.431 [2024-11-20 05:22:09.316280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.431 [2024-11-20 05:22:09.316288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85560 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0xdc987c88 00:24:26.432 [2024-11-20 05:22:09.316367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0xdc987c88 00:24:26.432 [2024-11-20 05:22:09.316426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 
key:0xdc987c88 00:24:26.432 [2024-11-20 05:22:09.316441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0xdc987c88 00:24:26.432 [2024-11-20 05:22:09.316520] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0xdc987c88 00:24:26.432 [2024-11-20 05:22:09.316578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 
[2024-11-20 05:22:09.316600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0xdc987c88 00:24:26.432 [2024-11-20 05:22:09.316623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85680 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0xdc987c88 00:24:26.432 [2024-11-20 05:22:09.316681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0xdc987c88 00:24:26.432 [2024-11-20 05:22:09.316710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316752] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d9a00 len:0x1000 key:0xbf0bfe7e 00:24:26.432 [2024-11-20 05:22:09.316766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0xdc987c88 00:24:26.432 [2024-11-20 05:22:09.316796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.432 [2024-11-20 05:22:09.316810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.432 [2024-11-20 05:22:09.316819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.316825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.316839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.316854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.316868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.316882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.316896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.316910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.316925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.316939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.316953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.316969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 
key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.316983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.316991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.316997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.317040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317060] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.317074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.317092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.317121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.317151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.317165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.317180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.317209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.317252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.317266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:26.433 [2024-11-20 05:22:09.317294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0xdc987c88 00:24:26.433 [2024-11-20 05:22:09.317326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0xbf0bfe7e 00:24:26.433 [2024-11-20 05:22:09.317341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.433 [2024-11-20 05:22:09.317349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 
[2024-11-20 05:22:09.317448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317522] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 
[2024-11-20 05:22:09.317600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317752] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:95 nsid:1 lba:86064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0xdc987c88 00:24:26.434 [2024-11-20 05:22:09.317833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0xbf0bfe7e 00:24:26.434 [2024-11-20 05:22:09.317862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.434 [2024-11-20 05:22:09.317870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.434 [2024-11-20 05:22:09.317876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.317884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0xdc987c88 00:24:26.435 [2024-11-20 05:22:09.317890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.317898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0xdc987c88 
00:24:26.435 [2024-11-20 05:22:09.317905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.317912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0xbf0bfe7e 00:24:26.435 [2024-11-20 05:22:09.317919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.317927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0xdc987c88 00:24:26.435 [2024-11-20 05:22:09.317933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.317941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.435 [2024-11-20 05:22:09.317947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.317955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.435 [2024-11-20 05:22:09.317961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.317969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0xbf0bfe7e 00:24:26.435 [2024-11-20 05:22:09.317977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.317985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.435 [2024-11-20 05:22:09.317991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4a40 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.318278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.435 [2024-11-20 05:22:09.318288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.435 [2024-11-20 05:22:09.318294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86856 len:8 PRP1 0x0 PRP2 0x0 00:24:26.435 [2024-11-20 05:22:09.318303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:09.318338] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 00:24:26.435 [2024-11-20 05:22:09.318352] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:24:26.435 [2024-11-20 05:22:09.318359] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:26.435 [2024-11-20 05:22:09.320239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.435 [2024-11-20 05:22:09.320272] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:26.435 [2024-11-20 05:22:09.333599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.435 [2024-11-20 05:22:09.377601] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:26.435 [2024-11-20 05:22:12.773085] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:26.435 [2024-11-20 05:22:12.773129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.435 [2024-11-20 05:22:12.773139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 
sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.435 [2024-11-20 05:22:12.773206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x7b95f3ef 00:24:26.435 [2024-11-20 05:22:12.773240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x7b95f3ef 00:24:26.435 [2024-11-20 05:22:12.773255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773263] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388f600 len:0x1000 key:0x7b95f3ef 00:24:26.435 [2024-11-20 05:22:12.773269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x7b95f3ef 00:24:26.435 [2024-11-20 05:22:12.773298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.435 [2024-11-20 05:22:12.773312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:116 nsid:1 lba:15216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x7b95f3ef 00:24:26.435 [2024-11-20 05:22:12.773341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.435 [2024-11-20 05:22:12.773369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x7b95f3ef 00:24:26.435 [2024-11-20 05:22:12.773386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 
len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.435 [2024-11-20 05:22:12.773429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x82280139 00:24:26.435 [2024-11-20 05:22:12.773443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.435 [2024-11-20 05:22:12.773451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x7b95f3ef 00:24:26.435 [2024-11-20 05:22:12.773458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x82280139 00:24:26.436 [2024-11-20 05:22:12.773472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x82280139 00:24:26.436 [2024-11-20 05:22:12.773487] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x82280139 00:24:26.436 [2024-11-20 05:22:12.773589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x82280139 00:24:26.436 [2024-11-20 05:22:12.773617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x82280139 00:24:26.436 [2024-11-20 05:22:12.773632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 
05:22:12.773640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x82280139 00:24:26.436 [2024-11-20 05:22:12.773674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x82280139 00:24:26.436 [2024-11-20 05:22:12.773688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x82280139 00:24:26.436 [2024-11-20 05:22:12.773760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x82280139 00:24:26.436 
[2024-11-20 05:22:12.773791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x82280139 00:24:26.436 [2024-11-20 05:22:12.773806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x7b95f3ef 00:24:26.436 [2024-11-20 05:22:12.773876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.436 [2024-11-20 05:22:12.773888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.436 [2024-11-20 05:22:12.773894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.773902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.773908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.773916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013879b80 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.773923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.773931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013878b00 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.773938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 
00:24:26.437 [2024-11-20 05:22:12.773945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013877a80 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.773952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.773960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.773966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.773974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.773980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.773988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.773994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.774074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14808 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x20000753a000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.774131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.774201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.774247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.774318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.774333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.774347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x82280139 00:24:26.437 [2024-11-20 05:22:12.774376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:15584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x7b95f3ef 00:24:26.437 [2024-11-20 05:22:12.774405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.437 [2024-11-20 05:22:12.774428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.437 [2024-11-20 05:22:12.774434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x82280139 00:24:26.438 
[2024-11-20 05:22:12.774478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.438 [2024-11-20 05:22:12.774580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.438 [2024-11-20 05:22:12.774594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.438 [2024-11-20 05:22:12.774623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 
[2024-11-20 05:22:12.774631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.438 [2024-11-20 05:22:12.774637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e2e80 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.438 [2024-11-20 05:22:12.774679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 
nsid:1 lba:15696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.438 [2024-11-20 05:22:12.774722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.438 
[2024-11-20 05:22:12.774781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.438 [2024-11-20 05:22:12.774881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x7b95f3ef 00:24:26.438 [2024-11-20 05:22:12.774910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.774925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.774932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.438 [2024-11-20 05:22:12.774940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.783510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x82280139 00:24:26.438 [2024-11-20 05:22:12.783519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.438 [2024-11-20 05:22:12.783528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x82280139 00:24:26.439 [2024-11-20 05:22:12.783535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:12.783543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x7b95f3ef 00:24:26.439 [2024-11-20 05:22:12.783550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:12.783811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.439 [2024-11-20 05:22:12.783821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.439 [2024-11-20 05:22:12.783827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:8 PRP1 0x0 PRP2 0x0 
00:24:26.439 [2024-11-20 05:22:12.783834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:12.783868] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:24:26.439 [2024-11-20 05:22:12.783878] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:24:26.439 [2024-11-20 05:22:12.783885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.439 [2024-11-20 05:22:12.783910] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:26.439 [2024-11-20 05:22:12.783920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.439 [2024-11-20 05:22:12.783926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:228e570 sqhd:d0c0 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:12.783934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.439 [2024-11-20 05:22:12.783942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:228e570 sqhd:d0c0 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:12.783949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.439 [2024-11-20 05:22:12.783955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:228e570 sqhd:d0c0 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:12.783962] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.439 [2024-11-20 05:22:12.783968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:228e570 sqhd:d0c0 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:12.800438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.439 [2024-11-20 05:22:12.800454] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:26.439 [2024-11-20 05:22:12.800461] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:26.439 [2024-11-20 05:22:12.802256] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.439 [2024-11-20 05:22:12.850096] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:26.439 [2024-11-20 05:22:17.189069] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:26.439 [2024-11-20 05:22:17.189114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.439 [2024-11-20 05:22:17.189160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 
05:22:17.189190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.439 [2024-11-20 05:22:17.189239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.439 [2024-11-20 05:22:17.189310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 
dnr:0 00:24:26.439 [2024-11-20 05:22:17.189347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 
05:22:17.189421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0xa82bb4ef 00:24:26.439 [2024-11-20 05:22:17.189442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.439 [2024-11-20 05:22:17.189480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138af580 len:0x1000 key:0x2a8de67b 00:24:26.439 [2024-11-20 05:22:17.189486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x2a8de67b 00:24:26.440 [2024-11-20 05:22:17.189500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x2a8de67b 00:24:26.440 [2024-11-20 05:22:17.189528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 
[2024-11-20 05:22:17.189574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0xa82bb4ef 00:24:26.440 [2024-11-20 05:22:17.189630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0xa82bb4ef 00:24:26.440 [2024-11-20 05:22:17.189645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x2a8de67b 00:24:26.440 [2024-11-20 05:22:17.189658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0xa82bb4ef 00:24:26.440 [2024-11-20 05:22:17.189686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x2a8de67b 00:24:26.440 [2024-11-20 05:22:17.189714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x2a8de67b 00:24:26.440 [2024-11-20 05:22:17.189744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0xa82bb4ef 00:24:26.440 [2024-11-20 05:22:17.189758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0xa82bb4ef 00:24:26.440 [2024-11-20 05:22:17.189772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0xa82bb4ef 00:24:26.440 [2024-11-20 05:22:17.189786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:14480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x2a8de67b 00:24:26.440 [2024-11-20 05:22:17.189801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x2a8de67b 00:24:26.440 [2024-11-20 05:22:17.189830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x2a8de67b 00:24:26.440 [2024-11-20 05:22:17.189844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0xa82bb4ef 00:24:26.440 
[2024-11-20 05:22:17.189873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0xa82bb4ef 00:24:26.440 [2024-11-20 05:22:17.189889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0xa82bb4ef 00:24:26.440 [2024-11-20 05:22:17.189931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.440 [2024-11-20 05:22:17.189939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.440 [2024-11-20 05:22:17.189945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.189953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.189959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.189967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.189973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.189981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.189987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.189995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0xa82bb4ef 00:24:26.441 [2024-11-20 05:22:17.190001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0xa82bb4ef 00:24:26.441 [2024-11-20 05:22:17.190029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 
key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0xa82bb4ef 00:24:26.441 [2024-11-20 05:22:17.190162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0xa82bb4ef 00:24:26.441 [2024-11-20 05:22:17.190220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0xa82bb4ef 00:24:26.441 [2024-11-20 05:22:17.190263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0xa82bb4ef 00:24:26.441 [2024-11-20 05:22:17.190320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14704 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0xa82bb4ef 00:24:26.441 [2024-11-20 05:22:17.190390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0xa82bb4ef 00:24:26.441 [2024-11-20 05:22:17.190434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x2a8de67b 00:24:26.441 [2024-11-20 05:22:17.190462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.441 [2024-11-20 05:22:17.190469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.441 [2024-11-20 05:22:17.190476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 
sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x2a8de67b 00:24:26.442 [2024-11-20 05:22:17.190491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x2a8de67b 00:24:26.442 [2024-11-20 05:22:17.190504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x2a8de67b 00:24:26.442 [2024-11-20 05:22:17.190577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:14112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 
05:22:17.190704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x2a8de67b 00:24:26.442 [2024-11-20 05:22:17.190718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f1580 len:0x1000 key:0x2a8de67b 00:24:26.442 [2024-11-20 05:22:17.190751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x2a8de67b 00:24:26.442 [2024-11-20 05:22:17.190779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x2a8de67b 00:24:26.442 [2024-11-20 05:22:17.190877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x2a8de67b 00:24:26.442 [2024-11-20 05:22:17.190893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x2a8de67b 00:24:26.442 [2024-11-20 05:22:17.190908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0xa82bb4ef 00:24:26.442 [2024-11-20 05:22:17.190922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.190944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.442 [2024-11-20 05:22:17.190950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22556f0 sqhd:4980 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.191235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.442 [2024-11-20 05:22:17.191245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.442 [2024-11-20 05:22:17.191251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14248 len:8 PRP1 0x0 PRP2 0x0 00:24:26.442 [2024-11-20 05:22:17.191258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.442 [2024-11-20 05:22:17.191294] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:24:26.442 [2024-11-20 05:22:17.191303] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:24:26.442 [2024-11-20 05:22:17.191311] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:26.442 [2024-11-20 05:22:17.193205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.443 [2024-11-20 05:22:17.193239] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:26.443 [2024-11-20 05:22:17.206259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.443 [2024-11-20 05:22:17.252771] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:26.443 00:24:26.443 Latency(us) 00:24:26.443 [2024-11-20T04:22:23.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.443 [2024-11-20T04:22:23.271Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:26.443 Verification LBA range: start 0x0 length 0x4000 00:24:26.443 NVMe0n1 : 15.00 22156.03 86.55 453.54 0.00 5648.99 370.59 571224.26 00:24:26.443 [2024-11-20T04:22:23.271Z] =================================================================================================================== 00:24:26.443 [2024-11-20T04:22:23.271Z] Total : 22156.03 86.55 453.54 0.00 5648.99 370.59 571224.26 00:24:26.443 Received shutdown signal, test time was about 15.000000 seconds 00:24:26.443 00:24:26.443 Latency(us) 00:24:26.443 [2024-11-20T04:22:23.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.443 [2024-11-20T04:22:23.271Z] =================================================================================================================== 00:24:26.443 [2024-11-20T04:22:23.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.443 05:22:23 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:26.443 05:22:23 -- host/failover.sh@65 -- # count=3 00:24:26.443 05:22:23 -- host/failover.sh@67 -- # (( count != 
3 )) 00:24:26.443 05:22:23 -- host/failover.sh@73 -- # bdevperf_pid=380710 00:24:26.443 05:22:23 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:26.443 05:22:23 -- host/failover.sh@75 -- # waitforlisten 380710 /var/tmp/bdevperf.sock 00:24:26.443 05:22:23 -- common/autotest_common.sh@829 -- # '[' -z 380710 ']' 00:24:26.443 05:22:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.443 05:22:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.443 05:22:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.443 05:22:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.443 05:22:23 -- common/autotest_common.sh@10 -- # set +x 00:24:27.380 05:22:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.380 05:22:23 -- common/autotest_common.sh@862 -- # return 0 00:24:27.380 05:22:23 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:27.380 [2024-11-20 05:22:24.077478] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:27.380 05:22:24 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:27.640 [2024-11-20 05:22:24.262119] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:27.640 05:22:24 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:27.898 NVMe0n1 00:24:27.898 05:22:24 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:28.157 00:24:28.157 05:22:24 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:28.416 00:24:28.416 05:22:25 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.416 05:22:25 -- host/failover.sh@82 -- # grep -q NVMe0 00:24:28.416 05:22:25 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:28.675 05:22:25 -- host/failover.sh@87 -- # sleep 3 00:24:31.966 05:22:28 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:31.966 05:22:28 -- host/failover.sh@88 -- # grep -q NVMe0 00:24:31.966 05:22:28 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.966 05:22:28 -- host/failover.sh@90 -- # run_test_pid=381645 00:24:31.966 05:22:28 -- host/failover.sh@92 -- # wait 381645 00:24:32.919 0 00:24:32.919 05:22:29 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:32.919 [2024-11-20 05:22:23.095867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:32.919 [2024-11-20 05:22:23.095916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380710 ] 00:24:32.919 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.919 [2024-11-20 05:22:23.150744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.919 [2024-11-20 05:22:23.217429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.919 [2024-11-20 05:22:25.380835] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:24:32.919 [2024-11-20 05:22:25.382120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.919 [2024-11-20 05:22:25.382147] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.919 [2024-11-20 05:22:25.403302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:32.919 [2024-11-20 05:22:25.421132] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:32.919 Running I/O for 1 seconds... 
00:24:32.919 00:24:32.919 Latency(us) 00:24:32.919 [2024-11-20T04:22:29.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.919 [2024-11-20T04:22:29.747Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:32.919 Verification LBA range: start 0x0 length 0x4000 00:24:32.920 NVMe0n1 : 1.00 24968.85 97.53 0.00 0.00 5101.57 1022.05 17601.10 00:24:32.920 [2024-11-20T04:22:29.748Z] =================================================================================================================== 00:24:32.920 [2024-11-20T04:22:29.748Z] Total : 24968.85 97.53 0.00 0.00 5101.57 1022.05 17601.10 00:24:32.920 05:22:29 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:32.920 05:22:29 -- host/failover.sh@95 -- # grep -q NVMe0 00:24:33.178 05:22:29 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.437 05:22:30 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:33.437 05:22:30 -- host/failover.sh@99 -- # grep -q NVMe0 00:24:33.696 05:22:30 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.696 05:22:30 -- host/failover.sh@101 -- # sleep 3 00:24:36.986 05:22:33 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.986 05:22:33 -- host/failover.sh@103 -- # grep -q NVMe0 00:24:36.986 05:22:33 -- host/failover.sh@108 -- # killprocess 380710 00:24:36.986 05:22:33 -- 
common/autotest_common.sh@936 -- # '[' -z 380710 ']' 00:24:36.986 05:22:33 -- common/autotest_common.sh@940 -- # kill -0 380710 00:24:36.986 05:22:33 -- common/autotest_common.sh@941 -- # uname 00:24:36.986 05:22:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:36.986 05:22:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 380710 00:24:36.986 05:22:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:36.986 05:22:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:36.986 05:22:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 380710' 00:24:36.986 killing process with pid 380710 00:24:36.986 05:22:33 -- common/autotest_common.sh@955 -- # kill 380710 00:24:36.986 05:22:33 -- common/autotest_common.sh@960 -- # wait 380710 00:24:37.245 05:22:33 -- host/failover.sh@110 -- # sync 00:24:37.245 05:22:33 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:37.504 05:22:34 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:37.504 05:22:34 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:37.504 05:22:34 -- host/failover.sh@116 -- # nvmftestfini 00:24:37.504 05:22:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:37.504 05:22:34 -- nvmf/common.sh@116 -- # sync 00:24:37.504 05:22:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:37.504 05:22:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:37.504 05:22:34 -- nvmf/common.sh@119 -- # set +e 00:24:37.504 05:22:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:37.504 05:22:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:37.504 rmmod nvme_rdma 00:24:37.504 rmmod nvme_fabrics 00:24:37.504 05:22:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:37.504 05:22:34 -- nvmf/common.sh@123 -- # set -e 00:24:37.504 05:22:34 -- 
nvmf/common.sh@124 -- # return 0 00:24:37.504 05:22:34 -- nvmf/common.sh@477 -- # '[' -n 377674 ']' 00:24:37.504 05:22:34 -- nvmf/common.sh@478 -- # killprocess 377674 00:24:37.504 05:22:34 -- common/autotest_common.sh@936 -- # '[' -z 377674 ']' 00:24:37.504 05:22:34 -- common/autotest_common.sh@940 -- # kill -0 377674 00:24:37.504 05:22:34 -- common/autotest_common.sh@941 -- # uname 00:24:37.504 05:22:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:37.504 05:22:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 377674 00:24:37.504 05:22:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:37.504 05:22:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:37.504 05:22:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 377674' 00:24:37.504 killing process with pid 377674 00:24:37.504 05:22:34 -- common/autotest_common.sh@955 -- # kill 377674 00:24:37.504 05:22:34 -- common/autotest_common.sh@960 -- # wait 377674 00:24:37.763 05:22:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:37.763 05:22:34 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:37.763 00:24:37.763 real 0m35.915s 00:24:37.763 user 2m4.437s 00:24:37.763 sys 0m5.744s 00:24:37.763 05:22:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:37.763 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 ************************************ 00:24:37.763 END TEST nvmf_failover 00:24:37.763 ************************************ 00:24:37.763 05:22:34 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:37.763 05:22:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:37.763 05:22:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:37.763 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 ************************************ 00:24:37.763 START TEST nvmf_discovery 
00:24:37.763 ************************************ 00:24:37.763 05:22:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:37.763 * Looking for test storage... 00:24:37.763 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:24:37.763 05:22:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:37.763 05:22:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:37.763 05:22:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:38.023 05:22:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:38.023 05:22:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:38.023 05:22:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:38.023 05:22:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:38.023 05:22:34 -- scripts/common.sh@335 -- # IFS=.-: 00:24:38.023 05:22:34 -- scripts/common.sh@335 -- # read -ra ver1 00:24:38.023 05:22:34 -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.023 05:22:34 -- scripts/common.sh@336 -- # read -ra ver2 00:24:38.023 05:22:34 -- scripts/common.sh@337 -- # local 'op=<' 00:24:38.023 05:22:34 -- scripts/common.sh@339 -- # ver1_l=2 00:24:38.023 05:22:34 -- scripts/common.sh@340 -- # ver2_l=1 00:24:38.023 05:22:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:38.023 05:22:34 -- scripts/common.sh@343 -- # case "$op" in 00:24:38.023 05:22:34 -- scripts/common.sh@344 -- # : 1 00:24:38.023 05:22:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:38.024 05:22:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.024 05:22:34 -- scripts/common.sh@364 -- # decimal 1 00:24:38.024 05:22:34 -- scripts/common.sh@352 -- # local d=1 00:24:38.024 05:22:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.024 05:22:34 -- scripts/common.sh@354 -- # echo 1 00:24:38.024 05:22:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:38.024 05:22:34 -- scripts/common.sh@365 -- # decimal 2 00:24:38.024 05:22:34 -- scripts/common.sh@352 -- # local d=2 00:24:38.024 05:22:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.024 05:22:34 -- scripts/common.sh@354 -- # echo 2 00:24:38.024 05:22:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:38.024 05:22:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:38.024 05:22:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:38.024 05:22:34 -- scripts/common.sh@367 -- # return 0 00:24:38.024 05:22:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.024 05:22:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:38.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.024 --rc genhtml_branch_coverage=1 00:24:38.024 --rc genhtml_function_coverage=1 00:24:38.024 --rc genhtml_legend=1 00:24:38.024 --rc geninfo_all_blocks=1 00:24:38.024 --rc geninfo_unexecuted_blocks=1 00:24:38.024 00:24:38.024 ' 00:24:38.024 05:22:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:38.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.024 --rc genhtml_branch_coverage=1 00:24:38.024 --rc genhtml_function_coverage=1 00:24:38.024 --rc genhtml_legend=1 00:24:38.024 --rc geninfo_all_blocks=1 00:24:38.024 --rc geninfo_unexecuted_blocks=1 00:24:38.024 00:24:38.024 ' 00:24:38.024 05:22:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:38.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.024 --rc genhtml_branch_coverage=1 00:24:38.024 --rc 
genhtml_function_coverage=1 00:24:38.024 --rc genhtml_legend=1 00:24:38.024 --rc geninfo_all_blocks=1 00:24:38.024 --rc geninfo_unexecuted_blocks=1 00:24:38.024 00:24:38.024 ' 00:24:38.024 05:22:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:38.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.024 --rc genhtml_branch_coverage=1 00:24:38.024 --rc genhtml_function_coverage=1 00:24:38.024 --rc genhtml_legend=1 00:24:38.024 --rc geninfo_all_blocks=1 00:24:38.024 --rc geninfo_unexecuted_blocks=1 00:24:38.024 00:24:38.024 ' 00:24:38.024 05:22:34 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.024 05:22:34 -- nvmf/common.sh@7 -- # uname -s 00:24:38.024 05:22:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.024 05:22:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.024 05:22:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.024 05:22:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.024 05:22:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.024 05:22:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.024 05:22:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.024 05:22:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.024 05:22:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.024 05:22:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.024 05:22:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:38.024 05:22:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:38.024 05:22:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.024 05:22:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.024 05:22:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:38.024 05:22:34 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:24:38.024 05:22:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.024 05:22:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.024 05:22:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.024 05:22:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.024 05:22:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.024 05:22:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.024 05:22:34 -- paths/export.sh@5 -- # export PATH 00:24:38.024 05:22:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.024 05:22:34 -- nvmf/common.sh@46 -- # : 0 00:24:38.024 05:22:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:38.024 05:22:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:38.024 05:22:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:38.024 05:22:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.024 05:22:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.024 05:22:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:38.024 05:22:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:38.024 05:22:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:38.024 05:22:34 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:24:38.024 05:22:34 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure 
the same IP for host and target.' 00:24:38.024 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:24:38.024 05:22:34 -- host/discovery.sh@13 -- # exit 0 00:24:38.024 00:24:38.024 real 0m0.173s 00:24:38.024 user 0m0.104s 00:24:38.024 sys 0m0.079s 00:24:38.024 05:22:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:38.024 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:24:38.024 ************************************ 00:24:38.024 END TEST nvmf_discovery 00:24:38.024 ************************************ 00:24:38.024 05:22:34 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:38.024 05:22:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:38.024 05:22:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:38.024 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:24:38.024 ************************************ 00:24:38.024 START TEST nvmf_discovery_remove_ifc 00:24:38.024 ************************************ 00:24:38.024 05:22:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:38.024 * Looking for test storage... 
00:24:38.024 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:24:38.024 05:22:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:38.024 05:22:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:38.024 05:22:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:38.284 05:22:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:38.284 05:22:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:38.284 05:22:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:38.284 05:22:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:38.284 05:22:34 -- scripts/common.sh@335 -- # IFS=.-: 00:24:38.284 05:22:34 -- scripts/common.sh@335 -- # read -ra ver1 00:24:38.284 05:22:34 -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.284 05:22:34 -- scripts/common.sh@336 -- # read -ra ver2 00:24:38.284 05:22:34 -- scripts/common.sh@337 -- # local 'op=<' 00:24:38.284 05:22:34 -- scripts/common.sh@339 -- # ver1_l=2 00:24:38.284 05:22:34 -- scripts/common.sh@340 -- # ver2_l=1 00:24:38.284 05:22:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:38.284 05:22:34 -- scripts/common.sh@343 -- # case "$op" in 00:24:38.284 05:22:34 -- scripts/common.sh@344 -- # : 1 00:24:38.284 05:22:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:38.284 05:22:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.284 05:22:34 -- scripts/common.sh@364 -- # decimal 1 00:24:38.284 05:22:34 -- scripts/common.sh@352 -- # local d=1 00:24:38.284 05:22:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.284 05:22:34 -- scripts/common.sh@354 -- # echo 1 00:24:38.284 05:22:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:38.284 05:22:34 -- scripts/common.sh@365 -- # decimal 2 00:24:38.284 05:22:34 -- scripts/common.sh@352 -- # local d=2 00:24:38.284 05:22:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.284 05:22:34 -- scripts/common.sh@354 -- # echo 2 00:24:38.284 05:22:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:38.284 05:22:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:38.284 05:22:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:38.284 05:22:34 -- scripts/common.sh@367 -- # return 0 00:24:38.284 05:22:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.284 05:22:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:38.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.284 --rc genhtml_branch_coverage=1 00:24:38.284 --rc genhtml_function_coverage=1 00:24:38.284 --rc genhtml_legend=1 00:24:38.284 --rc geninfo_all_blocks=1 00:24:38.284 --rc geninfo_unexecuted_blocks=1 00:24:38.284 00:24:38.284 ' 00:24:38.284 05:22:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:38.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.284 --rc genhtml_branch_coverage=1 00:24:38.284 --rc genhtml_function_coverage=1 00:24:38.284 --rc genhtml_legend=1 00:24:38.284 --rc geninfo_all_blocks=1 00:24:38.284 --rc geninfo_unexecuted_blocks=1 00:24:38.285 00:24:38.285 ' 00:24:38.285 05:22:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:38.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.285 --rc genhtml_branch_coverage=1 00:24:38.285 --rc 
genhtml_function_coverage=1 00:24:38.285 --rc genhtml_legend=1 00:24:38.285 --rc geninfo_all_blocks=1 00:24:38.285 --rc geninfo_unexecuted_blocks=1 00:24:38.285 00:24:38.285 ' 00:24:38.285 05:22:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:38.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.285 --rc genhtml_branch_coverage=1 00:24:38.285 --rc genhtml_function_coverage=1 00:24:38.285 --rc genhtml_legend=1 00:24:38.285 --rc geninfo_all_blocks=1 00:24:38.285 --rc geninfo_unexecuted_blocks=1 00:24:38.285 00:24:38.285 ' 00:24:38.285 05:22:34 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.285 05:22:34 -- nvmf/common.sh@7 -- # uname -s 00:24:38.285 05:22:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.285 05:22:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.285 05:22:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.285 05:22:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.285 05:22:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.285 05:22:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.285 05:22:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.285 05:22:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.285 05:22:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.285 05:22:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.285 05:22:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:38.285 05:22:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:38.285 05:22:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.285 05:22:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.285 05:22:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:38.285 05:22:34 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:24:38.285 05:22:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.285 05:22:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.285 05:22:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.285 05:22:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.285 05:22:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.285 05:22:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.285 05:22:34 -- paths/export.sh@5 -- # export PATH 00:24:38.285 05:22:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.285 05:22:34 -- nvmf/common.sh@46 -- # : 0 00:24:38.285 05:22:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:38.285 05:22:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:38.285 05:22:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:38.285 05:22:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.285 05:22:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.285 05:22:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:38.285 05:22:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:38.285 05:22:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:38.285 05:22:34 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:24:38.285 05:22:34 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma 
stack fails to configure the same IP for host and target.' 00:24:38.285 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:24:38.285 05:22:34 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:24:38.285 00:24:38.285 real 0m0.188s 00:24:38.285 user 0m0.120s 00:24:38.285 sys 0m0.078s 00:24:38.285 05:22:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:38.285 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:24:38.285 ************************************ 00:24:38.285 END TEST nvmf_discovery_remove_ifc 00:24:38.285 ************************************ 00:24:38.285 05:22:34 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:24:38.285 05:22:34 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:24:38.285 05:22:34 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:24:38.285 05:22:34 -- nvmf/nvmf.sh@120 -- # [[ phy-fallback == phy ]] 00:24:38.285 05:22:34 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:24:38.285 05:22:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.285 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:24:38.285 05:22:34 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:38.285 00:24:38.285 real 17m25.294s 00:24:38.285 user 56m40.330s 00:24:38.285 sys 3m36.732s 00:24:38.285 05:22:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:38.285 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:24:38.285 ************************************ 00:24:38.285 END TEST nvmf_rdma 00:24:38.285 ************************************ 00:24:38.285 05:22:34 -- spdk/autotest.sh@280 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:38.285 05:22:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:38.285 05:22:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:38.285 05:22:34 -- common/autotest_common.sh@10 -- # set +x 00:24:38.285 ************************************ 00:24:38.285 START TEST 
spdkcli_nvmf_rdma 00:24:38.285 ************************************ 00:24:38.285 05:22:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:38.285 * Looking for test storage... 00:24:38.285 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli 00:24:38.285 05:22:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:38.285 05:22:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:38.285 05:22:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:38.545 05:22:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:38.545 05:22:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:38.545 05:22:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:38.545 05:22:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:38.546 05:22:35 -- scripts/common.sh@335 -- # IFS=.-: 00:24:38.546 05:22:35 -- scripts/common.sh@335 -- # read -ra ver1 00:24:38.546 05:22:35 -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.546 05:22:35 -- scripts/common.sh@336 -- # read -ra ver2 00:24:38.546 05:22:35 -- scripts/common.sh@337 -- # local 'op=<' 00:24:38.546 05:22:35 -- scripts/common.sh@339 -- # ver1_l=2 00:24:38.546 05:22:35 -- scripts/common.sh@340 -- # ver2_l=1 00:24:38.546 05:22:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:38.546 05:22:35 -- scripts/common.sh@343 -- # case "$op" in 00:24:38.546 05:22:35 -- scripts/common.sh@344 -- # : 1 00:24:38.546 05:22:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:38.546 05:22:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.546 05:22:35 -- scripts/common.sh@364 -- # decimal 1 00:24:38.546 05:22:35 -- scripts/common.sh@352 -- # local d=1 00:24:38.546 05:22:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.546 05:22:35 -- scripts/common.sh@354 -- # echo 1 00:24:38.546 05:22:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:38.546 05:22:35 -- scripts/common.sh@365 -- # decimal 2 00:24:38.546 05:22:35 -- scripts/common.sh@352 -- # local d=2 00:24:38.546 05:22:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.546 05:22:35 -- scripts/common.sh@354 -- # echo 2 00:24:38.546 05:22:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:38.546 05:22:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:38.546 05:22:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:38.546 05:22:35 -- scripts/common.sh@367 -- # return 0 00:24:38.546 05:22:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.546 05:22:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:38.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.546 --rc genhtml_branch_coverage=1 00:24:38.546 --rc genhtml_function_coverage=1 00:24:38.546 --rc genhtml_legend=1 00:24:38.546 --rc geninfo_all_blocks=1 00:24:38.546 --rc geninfo_unexecuted_blocks=1 00:24:38.546 00:24:38.546 ' 00:24:38.546 05:22:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:38.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.546 --rc genhtml_branch_coverage=1 00:24:38.546 --rc genhtml_function_coverage=1 00:24:38.546 --rc genhtml_legend=1 00:24:38.546 --rc geninfo_all_blocks=1 00:24:38.546 --rc geninfo_unexecuted_blocks=1 00:24:38.546 00:24:38.546 ' 00:24:38.546 05:22:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:38.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.546 --rc genhtml_branch_coverage=1 00:24:38.546 --rc 
genhtml_function_coverage=1 00:24:38.546 --rc genhtml_legend=1 00:24:38.546 --rc geninfo_all_blocks=1 00:24:38.546 --rc geninfo_unexecuted_blocks=1 00:24:38.546 00:24:38.546 ' 00:24:38.546 05:22:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:38.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.546 --rc genhtml_branch_coverage=1 00:24:38.546 --rc genhtml_function_coverage=1 00:24:38.546 --rc genhtml_legend=1 00:24:38.546 --rc geninfo_all_blocks=1 00:24:38.546 --rc geninfo_unexecuted_blocks=1 00:24:38.546 00:24:38.546 ' 00:24:38.546 05:22:35 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/common.sh 00:24:38.546 05:22:35 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:38.546 05:22:35 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py 00:24:38.546 05:22:35 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.546 05:22:35 -- nvmf/common.sh@7 -- # uname -s 00:24:38.546 05:22:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.546 05:22:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.546 05:22:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.546 05:22:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.546 05:22:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.546 05:22:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.546 05:22:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.546 05:22:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.546 05:22:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.546 05:22:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.546 05:22:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
00:24:38.546 05:22:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:38.546 05:22:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.546 05:22:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.546 05:22:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:38.546 05:22:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:24:38.546 05:22:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.546 05:22:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.546 05:22:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.546 05:22:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.546 05:22:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.546 05:22:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.546 05:22:35 -- paths/export.sh@5 -- # export PATH 00:24:38.546 05:22:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.546 05:22:35 -- nvmf/common.sh@46 -- # : 0 00:24:38.546 05:22:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:38.546 05:22:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:38.546 05:22:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:38.546 05:22:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.546 05:22:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.546 05:22:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:38.546 05:22:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:38.546 05:22:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:38.546 05:22:35 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:38.546 05:22:35 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:38.546 05:22:35 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:38.546 05:22:35 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:38.546 05:22:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.546 05:22:35 -- common/autotest_common.sh@10 -- # set +x 00:24:38.546 05:22:35 
-- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:38.546 05:22:35 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=382995 00:24:38.546 05:22:35 -- spdkcli/common.sh@34 -- # waitforlisten 382995 00:24:38.546 05:22:35 -- common/autotest_common.sh@829 -- # '[' -z 382995 ']' 00:24:38.546 05:22:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.546 05:22:35 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:38.546 05:22:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.546 05:22:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.546 05:22:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.546 05:22:35 -- common/autotest_common.sh@10 -- # set +x 00:24:38.546 [2024-11-20 05:22:35.230896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:38.546 [2024-11-20 05:22:35.230945] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382995 ] 00:24:38.546 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.546 [2024-11-20 05:22:35.284557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:38.546 [2024-11-20 05:22:35.352317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:38.546 [2024-11-20 05:22:35.352472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.546 [2024-11-20 05:22:35.352474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.485 05:22:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.485 05:22:36 -- common/autotest_common.sh@862 -- # return 0 00:24:39.485 05:22:36 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:39.485 05:22:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:39.485 05:22:36 -- common/autotest_common.sh@10 -- # set +x 00:24:39.485 05:22:36 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:39.485 05:22:36 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:39.485 05:22:36 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:39.485 05:22:36 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:39.485 05:22:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.485 05:22:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:39.485 05:22:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:39.485 05:22:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:39.485 05:22:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.485 05:22:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:39.485 05:22:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.485 05:22:36 -- 
nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:39.485 05:22:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:39.485 05:22:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:39.485 05:22:36 -- common/autotest_common.sh@10 -- # set +x 00:24:44.761 05:22:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:44.761 05:22:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:44.761 05:22:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:44.761 05:22:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:44.761 05:22:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:44.761 05:22:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:44.761 05:22:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:44.761 05:22:41 -- nvmf/common.sh@294 -- # net_devs=() 00:24:44.761 05:22:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:44.761 05:22:41 -- nvmf/common.sh@295 -- # e810=() 00:24:44.761 05:22:41 -- nvmf/common.sh@295 -- # local -ga e810 00:24:44.761 05:22:41 -- nvmf/common.sh@296 -- # x722=() 00:24:44.761 05:22:41 -- nvmf/common.sh@296 -- # local -ga x722 00:24:44.761 05:22:41 -- nvmf/common.sh@297 -- # mlx=() 00:24:44.761 05:22:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:44.761 05:22:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.761 05:22:41 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.761 05:22:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:44.761 05:22:41 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:44.761 05:22:41 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:44.761 05:22:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:44.761 05:22:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:44.761 05:22:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:44.761 05:22:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:44.761 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:44.761 05:22:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:44.761 05:22:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:44.761 05:22:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:44.761 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:44.761 05:22:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:24:44.761 05:22:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:44.761 05:22:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:44.761 05:22:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@371 -- # [[ rdma == rdma ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@372 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@374 -- # (( 1 != 1 )) 00:24:44.761 05:22:41 -- nvmf/common.sh@376 -- # modinfo irdma 00:24:44.761 05:22:41 -- nvmf/common.sh@376 -- # modprobe irdma roce_ena=1 00:24:44.761 05:22:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:44.761 05:22:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.761 05:22:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:44.761 05:22:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.761 05:22:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:44.761 Found net devices under 0000:af:00.0: cvl_0_0 00:24:44.761 05:22:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.761 05:22:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:44.761 05:22:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.761 05:22:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:44.761 05:22:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.761 05:22:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:44.761 Found net devices under 0000:af:00.1: cvl_0_1 00:24:44.761 05:22:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.761 05:22:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:44.761 05:22:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:44.761 05:22:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:44.761 05:22:41 -- 
nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:44.761 05:22:41 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:44.761 05:22:41 -- nvmf/common.sh@57 -- # uname 00:24:44.761 05:22:41 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:44.761 05:22:41 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:44.761 05:22:41 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:44.761 05:22:41 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:44.761 05:22:41 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:44.761 05:22:41 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:44.761 05:22:41 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:44.761 05:22:41 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:44.761 05:22:41 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:44.761 05:22:41 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:44.761 05:22:41 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:44.761 05:22:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:44.761 05:22:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:44.761 05:22:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:44.761 05:22:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:44.761 05:22:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:44.761 05:22:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:44.761 05:22:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:44.761 05:22:41 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:24:44.761 05:22:41 -- nvmf/common.sh@104 -- # continue 2 00:24:44.761 05:22:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:44.761 05:22:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:24:44.761 05:22:41 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:44.761 05:22:41 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:24:44.761 05:22:41 -- nvmf/common.sh@104 -- # continue 2 00:24:44.761 05:22:41 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:44.761 05:22:41 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_0 00:24:44.761 05:22:41 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:24:44.761 05:22:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:24:44.761 05:22:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:44.761 05:22:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:44.761 05:22:41 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:44.761 05:22:41 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@80 -- # ip addr show cvl_0_0 00:24:44.761 4: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:44.761 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:24:44.761 altname enp175s0f0np0 00:24:44.761 altname ens801f0np0 00:24:44.761 inet 192.168.100.8/24 scope global cvl_0_0 00:24:44.761 valid_lft forever preferred_lft forever 00:24:44.761 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:24:44.761 valid_lft forever preferred_lft forever 00:24:44.761 05:22:41 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:44.761 05:22:41 -- nvmf/common.sh@73 -- # get_ip_address cvl_0_1 00:24:44.761 05:22:41 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:24:44.761 05:22:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:24:44.761 05:22:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:44.761 05:22:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:44.761 05:22:41 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:44.761 05:22:41 -- nvmf/common.sh@74 -- # [[ -z 
192.168.100.9 ]] 00:24:44.761 05:22:41 -- nvmf/common.sh@80 -- # ip addr show cvl_0_1 00:24:44.761 5: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:44.761 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:24:44.761 altname enp175s0f1np1 00:24:44.761 altname ens801f1np1 00:24:44.761 inet 192.168.100.9/24 scope global cvl_0_1 00:24:44.761 valid_lft forever preferred_lft forever 00:24:44.761 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:24:44.761 valid_lft forever preferred_lft forever 00:24:44.761 05:22:41 -- nvmf/common.sh@410 -- # return 0 00:24:44.761 05:22:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:44.762 05:22:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:44.762 05:22:41 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:44.762 05:22:41 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:44.762 05:22:41 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:44.762 05:22:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:44.762 05:22:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:44.762 05:22:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:44.762 05:22:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:44.762 05:22:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:44.762 05:22:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:44.762 05:22:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:44.762 05:22:41 -- nvmf/common.sh@102 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:44.762 05:22:41 -- nvmf/common.sh@103 -- # echo cvl_0_0 00:24:44.762 05:22:41 -- nvmf/common.sh@104 -- # continue 2 00:24:44.762 05:22:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:44.762 05:22:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:44.762 05:22:41 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:44.762 
05:22:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:44.762 05:22:41 -- nvmf/common.sh@102 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:44.762 05:22:41 -- nvmf/common.sh@103 -- # echo cvl_0_1 00:24:44.762 05:22:41 -- nvmf/common.sh@104 -- # continue 2 00:24:44.762 05:22:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:44.762 05:22:41 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_0 00:24:44.762 05:22:41 -- nvmf/common.sh@111 -- # interface=cvl_0_0 00:24:44.762 05:22:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_0 00:24:44.762 05:22:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:44.762 05:22:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:44.762 05:22:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:44.762 05:22:41 -- nvmf/common.sh@86 -- # get_ip_address cvl_0_1 00:24:44.762 05:22:41 -- nvmf/common.sh@111 -- # interface=cvl_0_1 00:24:44.762 05:22:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show cvl_0_1 00:24:44.762 05:22:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:44.762 05:22:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:44.762 05:22:41 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:44.762 192.168.100.9' 00:24:44.762 05:22:41 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:44.762 192.168.100.9' 00:24:44.762 05:22:41 -- nvmf/common.sh@445 -- # head -n 1 00:24:44.762 05:22:41 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:44.762 05:22:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:44.762 192.168.100.9' 00:24:44.762 05:22:41 -- nvmf/common.sh@446 -- # tail -n +2 00:24:44.762 05:22:41 -- nvmf/common.sh@446 -- # head -n 1 00:24:44.762 05:22:41 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:44.762 05:22:41 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:44.762 05:22:41 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:44.762 05:22:41 -- nvmf/common.sh@456 
-- # '[' rdma == tcp ']' 00:24:44.762 05:22:41 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:44.762 05:22:41 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:44.762 05:22:41 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:24:44.762 05:22:41 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:44.762 05:22:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.762 05:22:41 -- common/autotest_common.sh@10 -- # set +x 00:24:45.022 05:22:41 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:45.022 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:45.022 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:45.022 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:45.022 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:45.022 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:45.022 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:45.022 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:45.022 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:45.022 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:45.022 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:45.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:45.022 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:45.022 ' 00:24:45.281 
[2024-11-20 05:22:41.991131] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:47.817 [2024-11-20 05:22:44.059029] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1e24660/0x1e23ca0) succeed. 00:24:47.817 [2024-11-20 05:22:44.068509] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1e25b50/0x1e24220) succeed. 00:24:47.817 [2024-11-20 05:22:44.068530] rdma.c:2845:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:24:47.817 [2024-11-20 05:22:44.070464] iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:24:47.817 [2024-11-20 05:22:44.070488] iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:24:47.817 [2024-11-20 05:22:44.071653] transport.c: 625:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:24:47.817 [2024-11-20 05:22:44.073267] iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:24:47.817 [2024-11-20 05:22:44.073279] iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:24:47.817 [2024-11-20 05:22:44.074519] transport.c: 625:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 
00:24:48.755 [2024-11-20 05:22:45.258589] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:24:50.888 [2024-11-20 05:22:47.449926] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:24:52.879 [2024-11-20 05:22:49.332260] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:24:54.486 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:54.486 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:54.486 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:54.486 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:54.486 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:54.486 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:54.486 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:54.486 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:54.486 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:54.486 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:54.486 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:54.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:54.486 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:54.486 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:54.486 05:22:50 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:54.486 05:22:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:54.486 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:24:54.486 05:22:50 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:54.486 05:22:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.486 05:22:50 -- common/autotest_common.sh@10 -- # set +x 00:24:54.486 05:22:50 -- spdkcli/nvmf.sh@69 -- # check_match 00:24:54.486 05:22:50 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:54.774 05:22:51 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:54.774 05:22:51 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:54.774 05:22:51 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:54.774 05:22:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:54.774 05:22:51 -- common/autotest_common.sh@10 -- # set +x 00:24:54.774 05:22:51 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:54.774 05:22:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.774 05:22:51 -- common/autotest_common.sh@10 -- # set +x 00:24:54.774 05:22:51 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:54.774 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:54.774 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:54.774 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:54.774 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:24:54.774 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:24:54.774 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:54.774 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:54.774 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:54.774 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:54.774 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:54.774 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:54.774 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:54.774 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:54.774 ' 00:25:00.046 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:00.046 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:00.046 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:00.046 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:00.046 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:25:00.046 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:25:00.046 Executing command: ['/nvmf/subsystem delete 
nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:00.046 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:00.046 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:00.046 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:00.046 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:00.047 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:00.047 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:00.047 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:00.047 05:22:56 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:00.047 05:22:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:00.047 05:22:56 -- common/autotest_common.sh@10 -- # set +x 00:25:00.047 05:22:56 -- spdkcli/nvmf.sh@90 -- # killprocess 382995 00:25:00.047 05:22:56 -- common/autotest_common.sh@936 -- # '[' -z 382995 ']' 00:25:00.047 05:22:56 -- common/autotest_common.sh@940 -- # kill -0 382995 00:25:00.047 05:22:56 -- common/autotest_common.sh@941 -- # uname 00:25:00.047 05:22:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.047 05:22:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 382995 00:25:00.047 05:22:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:00.047 05:22:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:00.047 05:22:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 382995' 00:25:00.047 killing process with pid 382995 00:25:00.047 05:22:56 -- common/autotest_common.sh@955 -- # kill 382995 00:25:00.047 [2024-11-20 05:22:56.463795] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:00.047 05:22:56 -- 
common/autotest_common.sh@960 -- # wait 382995 00:25:00.047 05:22:56 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:25:00.047 05:22:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:00.047 05:22:56 -- nvmf/common.sh@116 -- # sync 00:25:00.047 05:22:56 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:00.047 05:22:56 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:00.047 05:22:56 -- nvmf/common.sh@119 -- # set +e 00:25:00.047 05:22:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:00.047 05:22:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:00.047 rmmod nvme_rdma 00:25:00.047 rmmod nvme_fabrics 00:25:00.047 05:22:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:00.047 05:22:56 -- nvmf/common.sh@123 -- # set -e 00:25:00.047 05:22:56 -- nvmf/common.sh@124 -- # return 0 00:25:00.047 05:22:56 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:00.047 05:22:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:00.047 05:22:56 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:00.047 00:25:00.047 real 0m21.751s 00:25:00.047 user 0m46.185s 00:25:00.047 sys 0m4.854s 00:25:00.047 05:22:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:00.047 05:22:56 -- common/autotest_common.sh@10 -- # set +x 00:25:00.047 ************************************ 00:25:00.047 END TEST spdkcli_nvmf_rdma 00:25:00.047 ************************************ 00:25:00.047 05:22:56 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 
00:25:00.047 05:22:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:00.047 05:22:56 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:25:00.047 05:22:56 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:25:00.047 05:22:56 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:25:00.047 05:22:56 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:25:00.047 05:22:56 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:25:00.047 05:22:56 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:25:00.047 05:22:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.047 05:22:56 -- common/autotest_common.sh@10 -- # set +x 00:25:00.047 05:22:56 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:25:00.047 05:22:56 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:25:00.047 05:22:56 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:25:00.047 05:22:56 -- common/autotest_common.sh@10 -- # set +x 00:25:04.240 INFO: APP EXITING 00:25:04.240 INFO: killing all VMs 00:25:04.240 INFO: killing vhost app 00:25:04.240 INFO: EXIT DONE 00:25:06.776 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:25:06.776 Waiting for block devices as requested 00:25:07.035 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:07.035 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:07.035 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:07.294 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:07.294 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:07.294 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:07.294 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:07.554 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:07.554 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:07.554 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:07.812 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:07.812 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:07.812 0000:80:04.4 (8086 2021): vfio-pci 
-> ioatdma 00:25:07.812 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:08.072 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:08.072 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:08.072 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:11.362 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:25:11.362 Cleaning 00:25:11.362 Removing: /var/run/dpdk/spdk0/config 00:25:11.362 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:11.362 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:11.362 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:11.362 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:11.362 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:11.362 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:11.362 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:11.362 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:11.362 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:11.362 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:11.362 Removing: /var/run/dpdk/spdk1/config 00:25:11.362 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:11.362 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:11.362 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:11.362 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:11.362 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:11.362 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:11.362 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:11.362 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:11.362 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:11.362 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:11.362 Removing: /var/run/dpdk/spdk2/config 00:25:11.362 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:11.362 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:11.362 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:11.362 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:11.362 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:11.362 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:11.362 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:11.362 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:11.362 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:11.362 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:11.362 Removing: /var/run/dpdk/spdk3/config 00:25:11.362 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:11.362 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:11.362 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:11.362 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:11.362 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:11.362 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:11.362 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:11.362 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:11.362 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:11.362 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:11.362 Removing: /var/run/dpdk/spdk4/config 00:25:11.362 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:11.362 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:11.362 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:11.362 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:11.362 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:11.362 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:11.362 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:11.362 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:11.362 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:11.362 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:11.362 Removing: /dev/shm/nvmf_trace.0 00:25:11.362 Removing: 
/dev/shm/spdk_tgt_trace.pid100949 00:25:11.362 Removing: /var/run/dpdk/spdk0 00:25:11.362 Removing: /var/run/dpdk/spdk1 00:25:11.362 Removing: /var/run/dpdk/spdk2 00:25:11.362 Removing: /var/run/dpdk/spdk3 00:25:11.362 Removing: /var/run/dpdk/spdk4 00:25:11.362 Removing: /var/run/dpdk/spdk_pid100949 00:25:11.362 Removing: /var/run/dpdk/spdk_pid101632 00:25:11.362 Removing: /var/run/dpdk/spdk_pid106988 00:25:11.362 Removing: /var/run/dpdk/spdk_pid108275 00:25:11.362 Removing: /var/run/dpdk/spdk_pid108567 00:25:11.362 Removing: /var/run/dpdk/spdk_pid108859 00:25:11.362 Removing: /var/run/dpdk/spdk_pid109281 00:25:11.362 Removing: /var/run/dpdk/spdk_pid109678 00:25:11.362 Removing: /var/run/dpdk/spdk_pid109892 00:25:11.362 Removing: /var/run/dpdk/spdk_pid110081 00:25:11.362 Removing: /var/run/dpdk/spdk_pid110409 00:25:11.362 Removing: /var/run/dpdk/spdk_pid111213 00:25:11.362 Removing: /var/run/dpdk/spdk_pid114225 00:25:11.362 Removing: /var/run/dpdk/spdk_pid114568 00:25:11.362 Removing: /var/run/dpdk/spdk_pid114963 00:25:11.362 Removing: /var/run/dpdk/spdk_pid114969 00:25:11.362 Removing: /var/run/dpdk/spdk_pid115466 00:25:11.362 Removing: /var/run/dpdk/spdk_pid115696 00:25:11.362 Removing: /var/run/dpdk/spdk_pid116083 00:25:11.362 Removing: /var/run/dpdk/spdk_pid116200 00:25:11.362 Removing: /var/run/dpdk/spdk_pid116458 00:25:11.362 Removing: /var/run/dpdk/spdk_pid116688 00:25:11.362 Removing: /var/run/dpdk/spdk_pid116854 00:25:11.362 Removing: /var/run/dpdk/spdk_pid116962 00:25:11.362 Removing: /var/run/dpdk/spdk_pid117518 00:25:11.362 Removing: /var/run/dpdk/spdk_pid117768 00:25:11.362 Removing: /var/run/dpdk/spdk_pid118065 00:25:11.362 Removing: /var/run/dpdk/spdk_pid118328 00:25:11.362 Removing: /var/run/dpdk/spdk_pid118360 00:25:11.362 Removing: /var/run/dpdk/spdk_pid118566 00:25:11.362 Removing: /var/run/dpdk/spdk_pid118770 00:25:11.362 Removing: /var/run/dpdk/spdk_pid119036 00:25:11.362 Removing: /var/run/dpdk/spdk_pid119258 00:25:11.362 Removing: 
/var/run/dpdk/spdk_pid119516 00:25:11.362 Removing: /var/run/dpdk/spdk_pid119735 00:25:11.362 Removing: /var/run/dpdk/spdk_pid120025 00:25:11.362 Removing: /var/run/dpdk/spdk_pid120262 00:25:11.362 Removing: /var/run/dpdk/spdk_pid120525 00:25:11.362 Removing: /var/run/dpdk/spdk_pid120739 00:25:11.362 Removing: /var/run/dpdk/spdk_pid120992 00:25:11.362 Removing: /var/run/dpdk/spdk_pid121211 00:25:11.362 Removing: /var/run/dpdk/spdk_pid121450 00:25:11.362 Removing: /var/run/dpdk/spdk_pid121671 00:25:11.622 Removing: /var/run/dpdk/spdk_pid121921 00:25:11.622 Removing: /var/run/dpdk/spdk_pid122146 00:25:11.622 Removing: /var/run/dpdk/spdk_pid122384 00:25:11.622 Removing: /var/run/dpdk/spdk_pid122625 00:25:11.622 Removing: /var/run/dpdk/spdk_pid122864 00:25:11.622 Removing: /var/run/dpdk/spdk_pid123081 00:25:11.622 Removing: /var/run/dpdk/spdk_pid123327 00:25:11.622 Removing: /var/run/dpdk/spdk_pid123537 00:25:11.622 Removing: /var/run/dpdk/spdk_pid123775 00:25:11.622 Removing: /var/run/dpdk/spdk_pid123992 00:25:11.622 Removing: /var/run/dpdk/spdk_pid124244 00:25:11.622 Removing: /var/run/dpdk/spdk_pid124477 00:25:11.622 Removing: /var/run/dpdk/spdk_pid124725 00:25:11.622 Removing: /var/run/dpdk/spdk_pid124944 00:25:11.622 Removing: /var/run/dpdk/spdk_pid125198 00:25:11.622 Removing: /var/run/dpdk/spdk_pid125438 00:25:11.622 Removing: /var/run/dpdk/spdk_pid125685 00:25:11.622 Removing: /var/run/dpdk/spdk_pid125909 00:25:11.622 Removing: /var/run/dpdk/spdk_pid126162 00:25:11.622 Removing: /var/run/dpdk/spdk_pid126390 00:25:11.622 Removing: /var/run/dpdk/spdk_pid126642 00:25:11.622 Removing: /var/run/dpdk/spdk_pid126880 00:25:11.622 Removing: /var/run/dpdk/spdk_pid127137 00:25:11.622 Removing: /var/run/dpdk/spdk_pid127363 00:25:11.622 Removing: /var/run/dpdk/spdk_pid127615 00:25:11.622 Removing: /var/run/dpdk/spdk_pid127853 00:25:11.622 Removing: /var/run/dpdk/spdk_pid128115 00:25:11.622 Removing: /var/run/dpdk/spdk_pid128381 00:25:11.622 Removing: 
/var/run/dpdk/spdk_pid128693 00:25:11.622 Removing: /var/run/dpdk/spdk_pid132365 00:25:11.622 Removing: /var/run/dpdk/spdk_pid216097 00:25:11.622 Removing: /var/run/dpdk/spdk_pid220034 00:25:11.622 Removing: /var/run/dpdk/spdk_pid230359 00:25:11.622 Removing: /var/run/dpdk/spdk_pid235297 00:25:11.622 Removing: /var/run/dpdk/spdk_pid238823 00:25:11.622 Removing: /var/run/dpdk/spdk_pid239745 00:25:11.622 Removing: /var/run/dpdk/spdk_pid248067 00:25:11.622 Removing: /var/run/dpdk/spdk_pid248508 00:25:11.622 Removing: /var/run/dpdk/spdk_pid252311 00:25:11.622 Removing: /var/run/dpdk/spdk_pid257938 00:25:11.622 Removing: /var/run/dpdk/spdk_pid260536 00:25:11.622 Removing: /var/run/dpdk/spdk_pid270032 00:25:11.622 Removing: /var/run/dpdk/spdk_pid293536 00:25:11.622 Removing: /var/run/dpdk/spdk_pid296953 00:25:11.622 Removing: /var/run/dpdk/spdk_pid301908 00:25:11.622 Removing: /var/run/dpdk/spdk_pid332784 00:25:11.622 Removing: /var/run/dpdk/spdk_pid339436 00:25:11.622 Removing: /var/run/dpdk/spdk_pid340320 00:25:11.622 Removing: /var/run/dpdk/spdk_pid341146 00:25:11.622 Removing: /var/run/dpdk/spdk_pid342070 00:25:11.622 Removing: /var/run/dpdk/spdk_pid342519 00:25:11.622 Removing: /var/run/dpdk/spdk_pid346733 00:25:11.622 Removing: /var/run/dpdk/spdk_pid346739 00:25:11.622 Removing: /var/run/dpdk/spdk_pid350987 00:25:11.622 Removing: /var/run/dpdk/spdk_pid351459 00:25:11.622 Removing: /var/run/dpdk/spdk_pid352138 00:25:11.622 Removing: /var/run/dpdk/spdk_pid352144 00:25:11.622 Removing: /var/run/dpdk/spdk_pid353782 00:25:11.622 Removing: /var/run/dpdk/spdk_pid355617 00:25:11.622 Removing: /var/run/dpdk/spdk_pid357451 00:25:11.622 Removing: /var/run/dpdk/spdk_pid359285 00:25:11.622 Removing: /var/run/dpdk/spdk_pid361118 00:25:11.622 Removing: /var/run/dpdk/spdk_pid363274 00:25:11.622 Removing: /var/run/dpdk/spdk_pid369306 00:25:11.622 Removing: /var/run/dpdk/spdk_pid369842 00:25:11.622 Removing: /var/run/dpdk/spdk_pid371635 00:25:11.622 Removing: 
/var/run/dpdk/spdk_pid372479 00:25:11.622 Removing: /var/run/dpdk/spdk_pid377942 00:25:11.622 Removing: /var/run/dpdk/spdk_pid380710 00:25:11.622 Removing: /var/run/dpdk/spdk_pid382995 00:25:11.622 Removing: /var/run/dpdk/spdk_pid98673 00:25:11.882 Removing: /var/run/dpdk/spdk_pid99880 00:25:11.882 Clean 00:25:11.882 killing process with pid 52159 00:25:18.454 killing process with pid 52156 00:25:18.454 killing process with pid 52158 00:25:18.454 killing process with pid 52157 00:25:18.454 05:23:14 -- common/autotest_common.sh@1446 -- # return 0 00:25:18.454 05:23:14 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:25:18.454 05:23:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:18.454 05:23:14 -- common/autotest_common.sh@10 -- # set +x 00:25:18.454 05:23:14 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:25:18.454 05:23:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:18.454 05:23:14 -- common/autotest_common.sh@10 -- # set +x 00:25:18.454 05:23:14 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt 00:25:18.454 05:23:14 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/udev.log ]] 00:25:18.454 05:23:14 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/udev.log 00:25:18.454 05:23:14 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:25:18.454 05:23:14 -- spdk/autotest.sh@383 -- # hostname 00:25:18.455 05:23:14 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_test.info 00:25:18.455 geninfo: WARNING: invalid characters removed from testname! 
00:25:36.543 05:23:32 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:25:37.920 05:23:34 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:25:39.298 05:23:35 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:25:41.202 05:23:37 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:25:42.583 05:23:39 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:25:44.487 05:23:40 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info 00:25:45.864 05:23:42 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:45.864 05:23:42 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:25:45.864 05:23:42 -- common/autotest_common.sh@1690 -- $ lcov --version 00:25:45.864 05:23:42 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:25:45.864 05:23:42 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:25:45.864 05:23:42 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:25:45.864 05:23:42 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:25:45.864 05:23:42 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:25:45.864 05:23:42 -- scripts/common.sh@335 -- $ IFS=.-: 00:25:45.864 05:23:42 -- scripts/common.sh@335 -- $ read -ra ver1 00:25:45.864 05:23:42 -- scripts/common.sh@336 -- $ IFS=.-: 00:25:45.864 05:23:42 -- scripts/common.sh@336 -- $ read -ra ver2 00:25:45.864 05:23:42 -- scripts/common.sh@337 -- $ local 'op=<' 00:25:45.864 05:23:42 -- scripts/common.sh@339 -- $ ver1_l=2 00:25:45.864 05:23:42 -- scripts/common.sh@340 -- $ ver2_l=1 00:25:45.864 05:23:42 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:25:45.864 05:23:42 -- scripts/common.sh@343 -- $ case "$op" in 00:25:45.864 05:23:42 -- scripts/common.sh@344 -- $ : 1 00:25:45.864 05:23:42 -- scripts/common.sh@363 
-- $ (( v = 0 )) 00:25:45.864 05:23:42 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:45.864 05:23:42 -- scripts/common.sh@364 -- $ decimal 1 00:25:45.864 05:23:42 -- scripts/common.sh@352 -- $ local d=1 00:25:45.864 05:23:42 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:25:45.864 05:23:42 -- scripts/common.sh@354 -- $ echo 1 00:25:45.864 05:23:42 -- scripts/common.sh@364 -- $ ver1[v]=1 00:25:45.864 05:23:42 -- scripts/common.sh@365 -- $ decimal 2 00:25:45.864 05:23:42 -- scripts/common.sh@352 -- $ local d=2 00:25:45.864 05:23:42 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:25:45.864 05:23:42 -- scripts/common.sh@354 -- $ echo 2 00:25:45.864 05:23:42 -- scripts/common.sh@365 -- $ ver2[v]=2 00:25:45.864 05:23:42 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:25:45.864 05:23:42 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:25:45.864 05:23:42 -- scripts/common.sh@367 -- $ return 0 00:25:45.864 05:23:42 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.864 05:23:42 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:25:45.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.864 --rc genhtml_branch_coverage=1 00:25:45.864 --rc genhtml_function_coverage=1 00:25:45.864 --rc genhtml_legend=1 00:25:45.864 --rc geninfo_all_blocks=1 00:25:45.864 --rc geninfo_unexecuted_blocks=1 00:25:45.864 00:25:45.864 ' 00:25:45.864 05:23:42 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:25:45.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.864 --rc genhtml_branch_coverage=1 00:25:45.864 --rc genhtml_function_coverage=1 00:25:45.864 --rc genhtml_legend=1 00:25:45.864 --rc geninfo_all_blocks=1 00:25:45.864 --rc geninfo_unexecuted_blocks=1 00:25:45.864 00:25:45.864 ' 00:25:45.864 05:23:42 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:25:45.864 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:25:45.864 --rc genhtml_branch_coverage=1 00:25:45.864 --rc genhtml_function_coverage=1 00:25:45.864 --rc genhtml_legend=1 00:25:45.864 --rc geninfo_all_blocks=1 00:25:45.864 --rc geninfo_unexecuted_blocks=1 00:25:45.864 00:25:45.864 ' 00:25:45.864 05:23:42 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:25:45.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.864 --rc genhtml_branch_coverage=1 00:25:45.864 --rc genhtml_function_coverage=1 00:25:45.864 --rc genhtml_legend=1 00:25:45.864 --rc geninfo_all_blocks=1 00:25:45.864 --rc geninfo_unexecuted_blocks=1 00:25:45.864 00:25:45.864 ' 00:25:45.865 05:23:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:25:45.865 05:23:42 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:45.865 05:23:42 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.865 05:23:42 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.865 05:23:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.865 05:23:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.865 05:23:42 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.865 05:23:42 -- paths/export.sh@5 -- $ export PATH 00:25:45.865 05:23:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.865 05:23:42 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output 00:25:45.865 05:23:42 -- common/autobuild_common.sh@440 -- $ date +%s 00:25:45.865 05:23:42 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732076622.XXXXXX 00:25:45.865 05:23:42 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732076622.VHNwOv 00:25:45.865 05:23:42 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:25:45.865 05:23:42 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:25:45.865 05:23:42 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/' 00:25:45.865 05:23:42 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp' 00:25:45.865 05:23:42 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:25:45.865 05:23:42 -- common/autobuild_common.sh@456 -- $ get_config_params 00:25:45.865 05:23:42 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:25:45.865 05:23:42 -- common/autotest_common.sh@10 -- $ set +x 00:25:45.865 05:23:42 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:25:45.865 05:23:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:25:45.865 05:23:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:25:45.865 05:23:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:45.865 05:23:42 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:25:45.865 05:23:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:45.865 05:23:42 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:45.865 05:23:42 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:45.865 05:23:42 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:45.865 05:23:42 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt 00:25:45.865 05:23:42 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:45.865 + [[ -n 9247 ]] 00:25:45.865 + sudo kill 9247 00:25:45.874 [Pipeline] } 00:25:45.889 [Pipeline] // stage 00:25:45.893 [Pipeline] } 00:25:45.907 [Pipeline] // timeout 00:25:45.911 [Pipeline] } 00:25:45.925 [Pipeline] // catchError 00:25:45.929 [Pipeline] } 00:25:45.952 [Pipeline] // wrap 00:25:45.958 [Pipeline] } 00:25:45.971 [Pipeline] // catchError 00:25:45.981 [Pipeline] stage 00:25:45.983 [Pipeline] { (Epilogue) 00:25:45.997 [Pipeline] catchError 00:25:45.999 [Pipeline] { 00:25:46.011 
[Pipeline] echo 00:25:46.013 Cleanup processes 00:25:46.019 [Pipeline] sh 00:25:46.305 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:25:46.305 398389 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:25:46.319 [Pipeline] sh 00:25:46.605 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:25:46.605 ++ grep -v 'sudo pgrep' 00:25:46.605 ++ awk '{print $1}' 00:25:46.605 + sudo kill -9 00:25:46.605 + true 00:25:46.618 [Pipeline] sh 00:25:46.901 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:56.890 [Pipeline] sh 00:25:57.176 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:57.176 Artifacts sizes are good 00:25:57.190 [Pipeline] archiveArtifacts 00:25:57.197 Archiving artifacts 00:25:57.509 [Pipeline] sh 00:25:57.797 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-cvl-phy-autotest 00:25:57.812 [Pipeline] cleanWs 00:25:57.822 [WS-CLEANUP] Deleting project workspace... 00:25:57.823 [WS-CLEANUP] Deferred wipeout is used... 00:25:57.829 [WS-CLEANUP] done 00:25:57.831 [Pipeline] } 00:25:57.852 [Pipeline] // catchError 00:25:57.866 [Pipeline] sh 00:25:58.151 + logger -p user.info -t JENKINS-CI 00:25:58.160 [Pipeline] } 00:25:58.174 [Pipeline] // stage 00:25:58.180 [Pipeline] } 00:25:58.195 [Pipeline] // node 00:25:58.200 [Pipeline] End of Pipeline 00:25:58.239 Finished: SUCCESS